HUMAN ERROR AND RETRAINING: APPLYING THE

SYSTEM OF PROFOUND KNOWLEDGE TO

INVESTIGATIONS IN PHARMACEUTICAL

QUALITY CONTROL LABORATORIES

______

A Thesis

Presented to the Faculty of

California State University, Dominguez Hills

______

In Partial Fulfillment

of the Requirements for the Degree of

Master of Science

in

Quality Assurance

______

by

Marc R. Bell

Fall 2017

TABLE OF CONTENTS

PAGE

TABLE OF CONTENTS ...... ii
LIST OF TABLES ...... iii
LIST OF FIGURES ...... iv
ABSTRACT ...... v

CHAPTER

1. INTRODUCTION ...... 1

   Background ...... 1
   Statement of the Problem ...... 12
   Purpose of the Study ...... 13
   Theoretical Bases and Organization ...... 14
   Limitation of the Study ...... 15
   Definition of Terms ...... 16

2. REVIEW OF THE LITERATURE ...... 19

   Introduction ...... 19
   Brief History of Quality Management ...... 20
   System of Profound Knowledge ...... 23
   Appreciation for a System ...... 37
   Psychology ...... 63

3. METHODOLOGY ...... 84

   Design of the Study ...... 84
   Data Analysis Procedure ...... 85

4. RESULTS AND DISCUSSION ...... 87

   Discussion of Results ...... 91

5. SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS ...... 111

REFERENCES ...... 116

LIST OF TABLES

PAGE

1. Comparison of Analysis and Synthesis ...... 30

2. Summary of Literature Search Results...... 88

3. Evaluation of Literature Search Results ...... 90

LIST OF FIGURES

PAGE

1. Graphical Summary of the Evaluation of the Literature Search Results ...... 91

ABSTRACT

Remedial training is a common corrective action when failure investigations identify human error as a causal factor. Over time, however, human error persists, and problems recur. Literature research supports the hypothesis that recurrence stems from the inappropriate use of remedial training, which fails to account for two elements of Deming’s System of Profound Knowledge: Appreciation for a System and Psychology. This thesis seeks to identify a tactic to improve how pharmaceutical quality control laboratories conduct failure investigations. To accomplish this, this thesis extracts applicable discussion points from a literature review and uses them to evaluate twelve proposed implements against Deming’s Appreciation for a System and Psychology. Based on this evaluation, this thesis identifies four implements of interest and then summarizes elements common to each: adequately justifying conclusions, soliciting honest feedback, adopting a system perspective, recognizing bias and evaluating established support, and promoting organizational learning and improvement.

CHAPTER 1

INTRODUCTION

Best efforts and hard work, not guided by new knowledge …only dig deeper the pit that we are in.

W. Edwards Deming

Human error and retraining are expansive topics not limited to pharmaceutical quality control (QC) laboratories. To clarify its scope, this thesis first summarizes the regulatory environment familiar to pharmaceutical QC laboratories and then presents a theoretical basis for confronting the use of remedial training to resolve human error.

Background

Pharmaceutical products intended for sale in the United States (US) are subject to the regulations established by the US Food and Drug Administration (FDA). For the manufacture of pharmaceuticals for human use, the FDA requires laboratory testing to determine “satisfactory conformance to final specifications” (21 CFR 211.165 (a)). With respect to pharmaceutical QC laboratories, these tests may include the evaluation of physical appearance and color, chemical identity and purity, or biological indicators such as bacterial endotoxin count or product sterility. Concurrent with testing, the FDA requires that investigations be conducted for “any unexplained discrepancy” or when specifications are not met (21 CFR 211.192). Failures to meet specifications include results determined to be out of specification (OOS) or out of tolerance (OOT). Unexplained discrepancies include deviations from standard operating procedures (SOPs) and instrument or software errors.
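To make the OOS/OOT distinction concrete, the minimal Python sketch below classifies a single assay result; it is a hedged illustration only, and the assay, specification limits, and tolerance limits are invented for this example rather than drawn from any regulation or cited source.

# Hypothetical illustration of the OOS/OOT distinction for a single assay.
# Specification and tolerance limits are invented for this sketch.
SPEC_LIMITS = (95.0, 105.0)       # registered specification (% label claim)
TOLERANCE_LIMITS = (97.0, 103.0)  # tighter internal tolerance (alert limits)

def classify_result(percent_label_claim: float) -> str:
    """Classify a result; anything other than 'pass' would trigger an
    investigation under 21 CFR 211.192."""
    spec_low, spec_high = SPEC_LIMITS
    tol_low, tol_high = TOLERANCE_LIMITS
    if not (spec_low <= percent_label_claim <= spec_high):
        return "OOS"  # out of specification
    if not (tol_low <= percent_label_claim <= tol_high):
        return "OOT"  # out of tolerance, but still within specification
    return "pass"

for result in (99.8, 103.9, 94.2):
    print(result, "->", classify_result(result))

Under 21 CFR 211.192, any result classified as OOS, like any unexplained discrepancy, would require an investigation.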

The FDA and Root Cause Analysis

As the investigation identifies problems, the laboratory or parent organization applies short-term corrections in order to resume business operations. However, while short-term corrections are vital to the success of a business, the underlying objective of investigations is to determine the root causes of problems and prevent their recurrence (U.S. Food and Drug Administration [FDA], 2009). A problem’s root causes are its fundamental contributing factors “that can be reasonably identified and that management must control” (Vinnem, Hestad, Kvaløy, & Skogdalen, 2010, p. 1142).

Specific root cause analysis (RCA) methods are “approaches, tools, and techniques” used to investigate the root cause of problems (American Society for Quality [ASQ], n.d. e). RCA methods vary in complexity, but they typically involve a documented approach to brainstorming and analysis of results. Examples of RCA methods include the Five Whys method of successive questioning, frequency prioritization via Pareto Analysis, and causal factor trees such as Ishikawa’s Fishbone Diagram. Other approaches include the Interrelationship Diagram to tally contributions of related factors, Current Reality Trees linked by arrows of causation, the Kepner-Tregoe Is-Is Not approach, and complex, data-driven approaches supported by proprietary software (Yuniarto, 2012). These methods can work independently or combine to assess multiple factors (Jayswal, Li, Zanwar, Lou, & Huang, 2011). A detailed evaluation of each method is beyond the scope of this thesis. Nevertheless, when used properly, RCA methods produce corrective actions that prevent the recurrence of problems and help demonstrate a “robust quality culture” (FDA, 2015, p. 12).
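As one concrete illustration of frequency prioritization, the short Python sketch below tallies causal-factor tags from closed investigations and ranks them Pareto-style; the categories and counts are entirely hypothetical, invented for illustration rather than drawn from any cited source.

from collections import Counter

# Hypothetical causal-factor tags pulled from a year of closed laboratory
# investigations; categories and counts are illustrative only.
causal_factors = (
    ["procedure unclear"] * 14
    + ["instrument calibration"] * 9
    + ["sample handling"] * 6
    + ["analyst slip"] * 4
    + ["software error"] * 2
)

def pareto(records):
    """Rank causal factors by frequency and report cumulative percentages."""
    counts = Counter(records)
    total = sum(counts.values())
    cumulative = 0
    for factor, count in counts.most_common():
        cumulative += count
        print(f"{factor:<25} {count:>3}  cumulative {100 * cumulative / total:5.1f}%")

pareto(causal_factors)

A ranking like this only prioritizes where to look first; it does not by itself establish a root cause or justify a corrective action.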

Corrective Actions and Remedial Training

When investigating problems influenced by human interaction, a common corrective action is to provide awareness or remedial training for the operators. The primary argument for remedial training is that humans caused the problem (root cause); therefore, those humans need more training (corrective action). Responses to FDA audit observations include statements that demonstrate this approach:

• “Isolated human error… the analyst and lab auditor have been counseled on this error” (Perrigo, 2008, pp. 3, 18).

• “All employees in dispensing have been retrained …relative to the dispensing process as referred in SOP…” (Caraco Pharmaceutical Laboratories LTD, 2009, p. 3).

• The “technician who committed the error received awareness training” (Meridian Medical Technologies, 2012, p. 4).

The remedial training approach views human errors as “moral issues …[due to] aberrant mental processes such as forgetfulness, inattention, poor motivation, carelessness, negligence, and recklessness” (Reason, 2000, p. 768). Because of this, organizations focus their corrective actions on “reducing unwanted variability” (Reason, 2000, p. 768) by correcting the errant behavior with additional training, and employees who fail to assimilate the training eventually face termination (Reason, 1990/2003, pp. 211-212).

While not explicitly stated, “followers of [this approach] …assume that bad things happen to bad people” (Reason, 2000, p. 768); therefore, retraining is a necessary punishment for their mistakes. This assumption and the justification for punishment are supported by examples of companies spending considerable “attention and resources on training, education, competency, technology, procedures, and supervision to prevent a quality problem” only to be rewarded with employees who “knowingly violate the system” (Naysmith, 2012, p. 2). However, Reason contended the following:

For those who pick over the bones of other people’s disasters, it often seems incredible that these warnings and human failures, seemingly so obvious in retrospect, should have gone unnoticed at the time. Being blessed with both uninvolvement and hindsight, it is a great temptation for retrospective observers to slip into a censorious frame of mind and to wonder at how these people could have been so blind, stupid, arrogant, ignorant, or reckless. (Reason, 1990/2003, p. 214)

Reason (2000, p. 768) later characterized the overemphasis on remedial training as “retraining, naming, blaming, and shaming,” or, more simply put, blame-retrain.

While not explicitly prohibited, the overuse of remedial training falls short of FDA expectations of robust corrective actions. Specifically, instead of acting haphazardly on a suspected root cause, the FDA expects pharmaceutical QC laboratories to investigate all possible causes, and “each contributing factor [is] considered when corrective action is implemented” (FDA, 2005, para. 6). With respect to human error, for example, recent FDA Warning Letters include the following statements:

• “Your firm prepared a corrective and preventive action (CAPA) stating …this deficiency was the result of a human error. We are concerned that your firm lacks documentation to support this conclusion …We are concerned with your inability to conduct a thorough evaluation …and identify problems that may lead to subsequent or new incidents... It is your responsibility to determine the appropriate corrective actions that will reduce the possibility of future …problems” (FDA, 2011b, “Unit VI” para. 2, “Unit III” para. 5).

• “Your response, however, is inadequate because your firm has yet to …identify a root cause …your firm states …this failure was due to human error and your firm will retrain appropriate staff on this aspect of the procedure” (FDA, 2011c, “CGMP Violations of Active Pharmaceutical Ingredients” section 1 para. 2, section 3c para. 1).

Rather than employing frequent retraining, the FDA requires that corrective actions align with the level of risk to product quality, that organizations evaluate the effectiveness of actions taken, and that corrective actions result in improvements to and an enhanced understanding of product and process (21 CFR 211.192; FDA, 2006b; FDA, 2006c). This understanding of product and process improvement and enhancement accompanies the FDA guidance on process validation. While not specifically directed at pharmaceutical QC laboratories, this guidance directs pharmaceutical manufacturers to “…maintain the process in a state of control over the life of the process, even as materials, equipment, production environment, personnel, and manufacturing procedures change” (FDA, 2011a, p. 5).

In concert with these statements, the FDA’s Guide to Inspections of Quality Systems states “products cannot be ‘tested into compliance’ by arbitrarily labeling out-of-specification lab results as ‘laboratory errors’ without an investigation resulting in scientifically valid criteria” (FDA, 1999, section 13 para. 4). The FDA also requires that human operators have adequate “education, training, and experience…to perform the assigned functions” and that there be an “adequate number of qualified personnel” (21 CFR 211.25); therefore, “laboratory error should be relatively rare” (FDA, 2006a, p. 6). By extension, investigations attributed to human error should also be relatively rare because “frequent errors suggest a problem that might be due to inadequate training of analysts, poorly maintained or improperly calibrated equipment, or careless work” (FDA, 2006a, p. 6). In other words, frequent remedial training should not be common in mature quality systems because failures should trigger corrective actions that drive product and process improvements. In support of this premise, the FDA’s July 2015 revision of their draft guidance on Quality Metrics acknowledged, “less robust quality systems often rely on preventing recurrence solely through personnel re-training (i.e., the same training has already been provided to the employee(s)) while more robust quality systems consider re-design and re-development of the process” (FDA, 2015, p. 12).

Attempting to prevent human error via remedial training is not unique to FDA-regulated industries. For example, Reason (2000, p. 768) states that the tendency to blame human influences “is the dominant tradition in medicine,” because “seeking as far as possible to uncouple a person’s …acts from any institutional responsibility is clearly in the interests of managers.” Kevin O’Donnell, Market Compliance Manager for the Irish Medicines Board, states that while retraining is a key preventative measure, the overuse of remedial training is not justified scientifically, for human error is not the only causative factor (Poska, 2009, p. 54). The World Health Organization (WHO), however, permits retraining to prevent problem recurrence during vaccine manufacturing (Chaloner-Larsson, Anderson, & Egan, 1997), but the Directorate-General for Health and Food Safety of the European Union (2012, p. 4) requires justification of human error as a root cause, “having taken care to ensure that process, procedural, or system-based errors have not been overlooked, if present.”

From a regulatory perspective, remedial training is an organizational response to the requirement to investigate errors. From a historical perspective, however, Ross discusses the concept of blame as Attribution Theory, which began in the early 1900s with the acknowledgement of biases in perception (1977, pp. 174-175). He expands the theory with his argument describing fundamental attribution errors, or the “general tendency to overestimate the importance of personal or dispositional factors” (p. 184). Reason concurs: “The occurrence of a man-made disaster leads inevitably to a search for human culprits…[but] the apparent clarity of retrospection springs in part from the shortcomings of human cognition” (Reason, 1990/2003, pp. 215-216). Instead of evaluating “situational factors beyond [the front-line operator’s] control” (Reason, 1990/2003, p. 212), these fundamental attribution errors persist because investigators fail to understand other system influences such as “the impact of relevant environmental forces and constraints” (Ross, 1977, p. 184) or “faulty applications of [statistical] knowledge or information in making estimates, inferences, and predictions” (Ross, 1977, p. 201). In addition, “short-term pressures and the human capacity to rationalize…lead reasonable people to act in unreasonable ways” (Kotter, 1996, p. 74). Countering the overemphasis on remedial training, therefore, requires a different way of thinking via an understanding of the system and its influences on human behavior.

System Thinking and W. Edwards Deming

Understanding and preventing errors is critical to the long-term success of a business. With regard to human errors, however, organizations too often adopt the person-centered approach of remedial training even though human operators are not the “main instigators [but] inheritors of system defects” due to poor design, conflicting goals, incorrect installation, faulty maintenance, defective organization, and bad management decisions (Reason, 1990/2003, p. 173; Reason, 1990b, p. 476). In pharmaceutical QC laboratories, for example, system defects include hiring analysts based on cost over skill, budgetary constraints that lead to poorly maintained instruments, and test acceptance criteria that are not statistically reasonable. In addition, Joseph Juran describes three categories of human error that result from the system:

• Technique errors arising from individuals lacking specific, needed skills;

• Errors aggravated by lack of feedback;

• Errors arising from the fact that humans cannot remain indefinitely in a state of complete, ready attention (Juran & Godfrey, 1999, pp. 3.43-3.44).

Other examples of system errors include training that lacks periodic assessment of effectiveness (World Health Organization, 2008), standard operating procedures with excess background or non-sequential order of steps (Chaloner-Larsson, Anderson, & Egan, 1997, p. 12), restrictive organizational hierarchies or bureaucracies, and competing projects from executive-level management.

Instead of focusing on and punishing human imperfection, system thinking recognizes that fallible humans contribute to defects and loss of productivity but accepts that human errors cannot always be eliminated (Reason, 2000; Puvanasvaran, Jamibollah, & Norazlin, 2014). System thinking realizes that errors “emerge from a complex ... interaction between the technical and social aspects of the system” (Reason, 1990b, p. 476). Termed latent errors, these complex interactions result in unidentified, dormant defects in system design that significantly contribute to human shortcomings (Reason, 1990/2003, p. 173). When presented with human error, for example, system thinking identifies latent system errors and implements system-based countermeasures to “change the conditions under which humans work” (Reason, 2000, p. 768). Suggested avenues to process improvement include addressing process failures and their effects (Paciarotti, Mazzuto, & D’Ettorre, 2014), reducing error probability via the poka-yoke theory of error proofing (Juran & Godfrey, 1999; Puvanasvaran et al., 2014), and other well-documented quality methodologies such as Lean, Six Sigma, or Total Quality Management.

However, because of the complexity of socio-technical systems, “more ‘engineering fixes’ [and] conventional remedies of human factors specialists” do not achieve problem resolution (Reason, 1990b, p. 476). Therefore, organizations must adopt a holistic perspective of problem diagnosis via systems thinking. For this purpose, this thesis studied the teachings of one influential proponent of system thinking: Dr. W. Edwards Deming.

Dr. Deming was a theorist and consultant on statistical and quality principles and practices and was commonly referred to as the father of modern quality. Born in 1900, Dr. Deming earned a B.S. in engineering, an M.S. in physics and mathematics, and a Ph.D. in mathematical physics from Yale in 1928 (The W. Edwards Deming Institute [Deming Institute], 2012). Dr. Deming’s early career included work at the Bureau of the Census, positions as an instructor at the USDA Graduate School and Stanford University, and later service as a consultant to the Secretary of War (Deming Institute, 2012). Because of his experience and association with the renowned physicist and statistician Dr. Walter Shewhart, Dr. Deming traveled to Japan many times between 1946 and 1956 to conduct studies related to agriculture and teach courses in statistics to aid rebuilding efforts (ASQ, n.d. a; Deming Institute, 2012). Dr. Deming received many awards for his work and continued to teach and consult for many organizations and governments until his death in 1993 (Deming Institute, 2012).

In his teachings, Dr. Deming outlined the importance of understanding how all components interact within a system. A core theme in his teachings was that business failings did not belong to the employee but to “a failure in the system… which is owned by management rather than the employee working in the system” (Naysmith, 2012, p. 1). From a production perspective, for example, Deming described the interaction between suppliers, the production line, distribution, and consumers but also illustrated the influence of consumer research and process redesign (Deming, 1982/2000). However, he added that improvement to a system came not from within the established framework but via “new knowledge…[and] basic ground rules of knowledge of change” (Deming, 1994/2000, pp. 1-2). In Out of the Crisis, his 14 Points focused on various problems and behaviors that, when addressed, would transform an organization (Deming, 1982/2000). Five of Deming’s 14 points of particular interest to this thesis were as follows (1982/2000, pp. 23-24):

• Improve constantly and forever the system of production and service, to improve quality and productivity, and thus constantly decrease costs (5).

• Institute leadership …The aim of supervision should be to help people …to do a better job. Supervision of management is in need of overhaul, as well as supervision of production workers (7).

• Drive out fear, so that everyone may work effectively for the company (8).

• Remove barriers that rob the hourly worker, …people in management, and in engineering of their right to pride of workmanship (12).

• Institute a vigorous program of education and self-improvement (13).

In The New Economics, Deming distilled his 14 points into a theory called the System of Profound Knowledge (SoPK). The “ideas and underlying principles [of the System of Profound Knowledge] were first shaped by the economic downturn in the 1980s,” marked by a decrease in U.S. quality and an increase in demand for foreign goods (Schultz, 2013). The purpose of this additional theory was to provide his “outside view” of common practices (1994/2000, p. 92). For example, Deming challenged the concept of meeting customer expectations because “the customer expects only what you and your competitor have led him to expect” (1994/2000, p. 7). The intent of this “new knowledge” was to challenge common thinking because “knowledge necessary for improvement comes from outside … [as] a system cannot understand itself” (1994/2000, pp. 1, 92).

The four parts of Deming’s System of Profound Knowledge are (1994/2000):

• Appreciation for a System

• Knowledge about Variation

• Theory of Knowledge

• Psychology

While Knowledge about Variation and Theory of Knowledge provide valuable insights, this thesis focused on two components of the SoPK theory: Appreciation for a System and Psychology.

Statement of the Problem

While training is mandated by FDA regulations (21 CFR 211.25), training in itself is insufficient because human error is unavoidable (Reason, 1990/2003). While significant opposition to remedial training exists in the published literature, its practice is nonetheless commonplace. Too many organizations abuse remedial training (Kozlowski, 2014) and fail to use human factor theory to adequately resolve issues related to human errors (Cintron, 2015). The persistence of human error stems from complex and varied processes that make assessing human factors difficult, from the overall lack of publicly available data given the confidential and sensitive nature of internal investigations (Cintron, 2015), from departments with conflicting knowledge and goals, and from a culture of correction rather than prevention. Other contributing factors include the desire to quickly close investigations by blaming results on laboratory error (Kuwahara, 2007), the misunderstanding and limitations of the regulatory definition of error (see Reason, 1990/2003, pp. 156-157), and, for short-expiry products, regulations that allow the destruction of investigations one year from the expiration of a batch (21 CFR 211.180).

In addition, and in spite of the arguments against remedial training, the FDA does not explicitly disallow the use of remedial training, and common RCA methods promote the inclusion of human error as a viable cause (e.g., “manpower” in the 5Ms & E, and “people” in the 4 Ps). However, as stated previously, “less robust quality systems” rely too heavily on remedial training (FDA, 2015, p. 12), whereas the FDA encourages organizations to implement actions to “reduce the risk of human error” (FDA, 2008, para. 8). To promote this implementation, the FDA in 2015 announced its interest in “what percentage of …corrective actions involved re-training of personnel” (FDA, 2015, pp. 7-8, 12). While the FDA removed this metric from its subsequent draft (FDA, 2016), there remain obvious benefits from not relying on remedial training, including improved staff morale and the ability to achieve long-term corrective actions. To accomplish this transition, this thesis introduced a novel approach to conducting investigations and root cause analysis via Dr. Deming’s System of Profound Knowledge.

Purpose of the Study

The published literature contains substantial theory on and tools for root cause analysis. The literature claims that, when applied correctly, these tools can determine the root causes of problems. In many instances, however, organizations experience problem recurrence, especially with problems related to human error.

In lieu of a novel or modified RCA method, this thesis applied two elements within Deming’s System of Profound Knowledge—Appreciation for a System and Psychology—to suggest improvements to the process used to conduct investigations and root cause analysis. Because Deming did not provide explicit instructions on how to conduct root cause analysis, this thesis conducted a literature review based on Appreciation for a System and Psychology and, based on that review, searched the literature for general tactics to improve the process of investigating failures, with the goal of countering the reliance on remedial training in pharmaceutical QC laboratories.

Theoretical Bases and Organization

W. Edwards Deming’s System of Profound Knowledge, found in The New Economics (Deming, 1994/2000), provided the theoretical basis for this thesis. More specifically, this thesis evaluated two components of the SoPK theory: Appreciation for a System and Psychology. Additional theories that supported these components follow.

Appreciation for a System

James Reason, noted expert on human error, contributed to the theoretical basis of system thinking. In particular, Reason’s discussion of latent errors and system defenses offered strong support for system theory. In addition, Thinking in Systems by Meadows (2008) provided an additional understanding of system thinking and system behavior.

Psychology

In discussing Deming’s SoPK, Naysmith commented, “one of the weaknesses in Deming’s systems thinking is the fleshy, unpredictable element of any business: the human factor. I believe Deming’s concepts can be complemented or enhanced with Reason’s research into ‘human error’” (Naysmith, 2012, p. 1). In particular, Reason’s thorough description and analysis of human error contributed to Deming’s concept of psychology (Reason, 1990/2003; Reason, 1990b; Reason, 1995; Reason, 2000).

Limitation of the Study

This thesis considered only those investigations conducted by pharmaceutical QC laboratories that tested drug products pursuant to FDA regulations. While the changes discussed herein might apply to other regulatory bodies, each pharmaceutical QC laboratory must evaluate its specific regulatory environment to determine the requirements for root cause and corrective action. In addition, with regard to remedial training and RCA, this study did not evaluate or recommend a specific RCA technique that would satisfy the proposed discontinuation of remedial training. However, the reader should become familiar with the various RCA techniques, especially those that challenge the current emphasis on remedial training.

Lastly, this study examined only those aspects related to two elements within Deming’s System of Profound Knowledge: Appreciation for a System and Psychology. Deming’s work on Knowledge of Variation and Theory of Knowledge was beyond the scope of this thesis. The reader might find considerable discussion on these topics in the published literature. In addition, while they are meaningful topics, this thesis did not address those actions outside of an organization’s sphere of influence (Covey, 1989) or individual improvement methodologies that addressed personal responsibility, goal setting, or esteem. The reader might reference the published literature for methodologies of other quality practitioners in addition to other managerial and psychological theories related to this topic.


Definition of Terms

Aim: In a system, its aim is a targeted condition that offers a “broad description of a condition” of a future state (Rother, 2009, pp. 43, 49) and an expectation of the system’s “performance over time” (Meadows, 2008, p. 88). An aim, however, cannot be confined to mission statements or slogans. Instead, all actions taken on a system affect its aim, even if the actions are unintentional and the resulting aim is undesirable.

Analytical Thinking: The historical approach to understanding problems: “A three-step process… take it apart …understand the behavior of each part taken separately, and then they try to aggregate the understanding of the parts into an understanding of the whole” (ACASA, 1992/2011, p. 11). While useful for small-scale experiments, analytical thinking fails to capture all system influences and behaviors (e.g., synergy).

Blame-Retrain: A colloquial, pejorative term for any attempt to prevent problem recurrences via accusatory, remedial training of personnel. Typically, the training involves rereading established procedures without consideration given to the content of the procedure or the quality of the training.

Boundary: In a system, the largest system that an organization, or an actor within an organization, can access and influence (ACASA, 1992/2011). Boundaries can expand or contract if the influence changes.

Corrective Action: Action taken on a problem’s root cause to prevent its recurrence.

Flow: In a system, the positive or negative source of change for a stock.

Fundamental Attribution Error: In failure investigations, the “general tendency to overestimate the importance of personal or dispositional factors” (Ross, 1977, p. 184).

Hierarchy: The ordered “arrangement of systems and [its interconnected] subsystems …[that] can largely take care of themselves …and yet serve the needs of the larger system” (Meadows, 2008, p. 82).

Human Error: A “generic term to encompass all those occasions in which a planned sequence of mental or physical activities fails to achieve its intended outcome, and when these failures cannot be attributed to the intervention of some chance agency” (Reason, 1990/2003, p. 9).

Idealized Redesign: “Employees at every level redesigning their part of the operation and making sure that the different designs complement each other, thus making available the benefit of their expertise and experience as well as their commitment” (Roth, 2013, p. 27). Idealized redesign is confined by system boundaries, but its actions can change system aims.

Latent Error: Unidentified, dormant defects in system design that significantly contribute to front-line errors (Reason, 1990/2003). Generally, these errors are difficult to detect because their effects are not realized until long after their commission.

Leverage Points: “…places in the system where a small change could lead to a large shift in behavior” (Meadows, 2008, p. 145).

Mistake: Errors which result from “deficiencies or failures in the judgmental and/or inferential processes involved in the selection of an objective or in the specification of the means to achieve it, irrespective of whether or not the actions directed by this decision-scheme run according to plan” (Reason, 1990/2003, p. 9).

Reengineering: A process that focuses “on two things: making major improvements in existing processes and promoting business innovation” (Roth, 2013, p. 27). (Can be compared to Idealized Redesign.)

Resilience: The “measure of a system’s ability to survive and persist within a variable environment …[and] even after a large perturbation” (Meadows, 2008, p. 76).

Root Cause: Fundamental contributing factors of a problem that “can be reasonably identified and that management must control” (Vinnem et al., 2010, p. 1142).

Self-organization: “…the capacity of a system to make its own structure more complex” (Meadows, 2008, p. 79). Typically, agents within a system effect this change, such as human participants improving internal processes for the benefit of the entire system.

Slips and Lapses: “Errors which result from some failure in the execution and/or storage stage of an action sequence, regardless of whether or not the plan which guided them was adequate to achieve its objective” (Reason, 1990/2003, p. 9).

SoPK: Deming’s System of Profound Knowledge.

Stock: The current quantity or state, or the tangible or intangible “elements of the system that you can see, feel, count, or measure at any given time” (Meadows, 2008, pp. 17-18). (A minimal stock-and-flow sketch appears at the end of this section.)

Synthesis: “…design the system as a whole and then derive the property of the parts from the properties of the whole …[because] the only way that we can think creatively about a system is to assume it was destroyed last night” (Ackoff Center for Advancement of Systems Approaches [ACASA], 1992/2011, pp. 14-15). (Can be compared to Idealized Redesign.) Compare to Analytical Thinking.
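To make the stock-and-flow vocabulary above concrete, the following minimal Python sketch treats a stock as a quantity changed only by its flows over time, the behavior Meadows (2008) describes. The open-investigations scenario and all numbers are hypothetical, chosen only for illustration.

# Hypothetical illustration of the Stock and Flow definitions above:
# a stock changes only through its inflows and outflows over time.

stock = 20.0            # e.g., open laboratory investigations (a stock)
inflow_per_week = 5.0   # new investigations opened (a flow, positive)
outflow_per_week = 6.0  # investigations closed (a flow, negative)

for week in range(1, 9):
    stock += inflow_per_week - outflow_per_week
    stock = max(stock, 0.0)  # a count of open investigations cannot go negative
    print(f"week {week}: open investigations = {stock:.0f}")

# Raising the closure rate (the outflow) is a leverage point: a small,
# sustained change in a flow shifts the stock's behavior over time.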

CHAPTER 2

REVIEW OF THE LITERATURE

Rather than being the main instigators …operators tend to be the inheritors of system defects …Their part is usually that of adding the final garnish to a lethal brew whose ingredients have already been long in the cooking.

James Reason

Introduction

This literature review aimed to disprove the validity of remedial training as a general root cause and corrective action. In addition, the literature review sought guidance on implementing Deming’s System of Profound Knowledge in pharmaceutical QC laboratories. Unfortunately, the peer-reviewed literature contained no explicit arguments in favor of remedial training and no current research that evaluated implementing Deming’s system in pharmaceutical QC laboratories. This lack of research was likely due to the specificity of the search, the “variety of causal elements [related to] quality program implementation” (Ono, 2013, Abstract), and Deming’s focus on “major cultural change …[instead of] working within a system familiar to managers” (Evans & Lindsay, 2008/2014, p. 61).

The literature review, therefore, sought to describe the two elements of interest to this thesis, namely Appreciation for a System and Psychology. This thesis conducted searches in the literature for topics such as the definition of quality, Deming management methods, Deming’s System of Profound Knowledge, root cause analysis, corrective actions, human error, human error reduction, and regulatory approaches. This search used Boolean search terms to narrow the results obtained via the CSUDH University Library Web Portal, Google Scholar, and the World Wide Web, using the Google search engine. The literature review also evaluated Deming’s system, using printed books from recognized authorities on system thinking and human error.

Brief History of Quality Management

Ono (2013, p. 1) contended, “The history of quality management improvement initiatives spans several decades as a response to the perceived threat …made by Japanese industries.” While “Deming …arguably had the most impact on Japanese high-quality manufacturing and business than any other individual” (Baris, 2015, p. 5), the evolution of quality management began long before Deming and the Japanese industries. A brief discussion of major milestones in the history of quality management follows.

During the era of mercantilism, early artisans controlled quality via organized unions called guilds (ASQ, n.d. d). In contrast to these skilled specialists, Adam Smith proclaimed, “The greatest improvements in the productive powers of labour …seem to have been the effects of the division of labour” (Smith, 1776, I.1.1). The father of modern-day laissez faire economics, Smith contended that, by dividing labor by task, productivity would increase, the unskilled would be employable, and the resultant profits “would be distributed to help all of society” (Roth, 2013, p. 25). However, while a pleasant thought, this altruistic goal failed due to the following:

The man whose whole life is spent in performing a few simple operations, …has no occasion to exert his understanding…[and] he naturally loses, therefore, the habit of such exertion, and generally becomes as stupid and ignorant as it is possible for a human creature to become. (Smith, 1776, V.1.178)

Therefore, unless the goal was an utterly mindless workforce, improvement required a more complete understanding of quality management.

After Smith, the theory of Mechanism emphasized the division of labor and argued that employees “should be programmed to function as parts of a ‘well-oiled machine’ with no need to think, fueled only by their pay” (Roth, 2013, p. 25). Aligned with Mechanism, Frederick Taylor “introduced the scientific method (observe, measure, repeat) …to break processes down into individual tasks, [and] then for discovering the most efficient way to complete each task” (Roth, 2013, pp. 25-26). Whereas Adam Smith sought to make unskilled persons employable, Taylor emphasized decreasing costs by “getting more productivity out of fewer workers” (Roth, 2013, p. 26). When workers resisted Taylor’s methods, he implemented incentivized quota systems to maintain productivity, which was successful until top management began to raise quotas (Roth, 2013).

In contrast to Taylor and Mechanism, Humanism contended that addressing employee needs and developing their potential would produce staff willing to take care of inefficiencies (Roth, 2013). Henry Ford and other adherents of the Humanistic ideology demonstrated that “treating workers as human beings rather than as mindless machine parts …did, indeed, generate increased profits, [but] most members of the business community refused to listen” (Roth, 2013, p. 26). During the 1930s and even through World War II, Mechanism still heavily influenced quality management (Dahlgaard-Park, 2011), as the primary focus was “the control of variation based on the work of Shewhart… [but] limited to the technical aspect of total quality” (Maguad & Krone, 2014, p. 23).

After World War II, “Deming …and other experts …transformed the quality concept …to a broader body of knowledge” (Maguad & Krone, 2014, p. 23). By combining mechanism and humanism (Roth, 2013), Deming “demonstrated a change in …leadership …led to improved quality, productivity, and profits” (Becker & Glascoff, 2014, p. 50). In particular, Deming taught a “top-down approach,” coupled with measuring efficiency (Barouch & Kleinhans, 2015, p. 203), but emphasized training the employees to use quality tools effectively (Roth, 2013). However, while combining the ideas of Adam Smith, Frederick Taylor, humanism, and Walter Shewhart, Deming claimed increased productivity depends less on quality tools and more on “finding ways to generate commitment and to release the potential of workers” (Roth, 2013, p. 27).

Not limited to releasing worker potential, the nascent school of Systems Thinking further combined Mechanism and Humanism via socio-technical systems theory. This theory argued that “human, social, organizational, [and] technical factors [should be] considered together [to achieve a] better understanding of how human, social and organizational factors affect the ways that work is done and technical systems are used” (Baxter & Sommerville, 2011, p. 16). Ackoff stated, “…the systems revolution involves two things: It involves the concept of system, and it involves the use of science. [Western management] had science but not the concept of a system” (ACASA, 1992/2011, p. 19). Systems theorists like Deming adopted this approach “because the interactions between [parts in a system] are …even more important to bottom-line improvement than the parts themselves” (Roth, 2013, p. 27).

Since the advent of systems thinking, there have been substantial advances in quality management, especially as it extended beyond the manufacturing sector into “service, healthcare, education, and government” (ASQ, n.d. b). Examples of these quality methods include Motorola’s Six Sigma and the Toyota Production System (TPS), also known as Lean Manufacturing (ASQ, n.d. c). The published literature contains numerous applications of these methods, and this thesis does not challenge their utility and benefit. Instead of these more popular methods, this thesis has sought to improve root cause analysis via the theoretical framework of Deming’s System of Profound Knowledge, because it “remains to date the strongest theoretical basis to [the quality management] discipline” (Barouch & Kleinhans, 2015, p. 204).

System of Profound Knowledge

Definition and Interpretation of Quality

As systems theory evolved, so did the various interpretations and “different emphases on how best [quality] could be implemented” (Maguad & Krone, 2014, p. 50). With respect to the definition of the term ‘quality,’ for example, several definitions exist:

• Crosby: “Conformance to clearly stated requirements,”

• ISO 9000:2000: “The degree to which a set of inherent characteristics fulfills requirements,”

• Deming: “Meeting and exceeding present and future customer needs and expectations,”

• Feigenbaum: “The total composite product and service characteristics …through which the product and service in use will meet the expectations of the customer,” and

• Juran: “Fitness for use” (Moodliar, Genis, Anelich, & Puren, 2014, p. 26).

Hoyer and Hoyer (2001, p. 54) grouped these definitions into two categories, referred to as quality levels:

• Level one is simply producing goods or delivering services with measurable characteristics that adhere to a fixed set of specifications, typically defined numerically.

• Level two products and services, apart from their measurable independent characteristics, simply satisfy customers’ expectations related to their use or consumption.

Within these categories, the Crosby and ISO 9000 definitions speak to level one, which focuses on conformance quality; the Deming and Feigenbaum definitions are level two, focusing on design quality; and Juran’s is considered a balance of the two levels, defining quality as fitness for use and free of deficiencies (Korakianiti & Rekkas, 2011).

In contrast to grouping into levels, Maguad and Krone (2014, p. 50) segmented quality definitions according to the approach. Specifically, they suggested there is an “artificial divide [between] two competing …strategies, namely ‘the culture change route’ (often attributed to Deming) versus the ‘project approach’ (often attributed to Juran)” (Maguad & Krone, 2014, p. 50). However, Maguad and Krone (2014, p. 50) argued against overly segmenting quality theories, for this polarization tends to be used “by consultants who want to differentiate their products from others [but] will only lead client organizations to perilous grounds.” In addition, there appears to be one common theme: meeting needs, and “the principles and practices of [quality management] seem to have a universal scope” (Barouch & Kleinhans, 2015, p. 204). These needs include meeting internal metrics, employee needs, and customer expectations. Therefore, regardless of the chosen quality definition, organizations must be mindful of all applicable needs, present and future, and work toward meeting and exceeding them.

Short-term Profits, Performance, and Thinking

While defining quality and needs is conceptually simple, meeting needs with quality goods and services is a difficult task. The cause is as follows:

We live in a complex world in which many institutions have difficulty sustaining a meaningful and coherent existence over an extended period. Some organizations seem to function well for a while but falter as competitive and economic pressures expose vulnerabilities. (Schultz, 2013, p. 20)

In addition to complex local and global markets, a successful system of management “must be adapted to the organizational culture and specificity” (Barouch & Kleinhans, 2015, p. 204). However,

Some individuals believe organizations have become too large and leaders are not in touch with public needs. Disgruntled interest groups demand results, while the disenfranchised raise questions about fair and ethical conduct, and wonder what can be done. They cling to the hope that big ideas accompanied by swagger and bravado will get them to a promised land. (Schultz, 2013, p. 20)

In search of this promised land, many people read through Out of the Crisis, by Deming, thinking they might uncover a formula for quality achievement (Hoyer & Hoyer, 2001). During this exodus, Deming saw that companies often were searching for a “magic stick” to obtain immediate results (Barouch & Kleinhans, 2015, p. 205). However, Deming did not provide such a formula, and the fabled “magic stick” did not exist. Instead, he argued, “Quality is multidimensional [and] virtually impossible to define …in terms of a single characteristic or agent” (Hoyer & Hoyer, 2001, p. 55). For example, in Out of the Crisis, Deming warned against short-term profits and performance (1982/2000, pp. 97-99, 105) because “return on investment is usually delayed and not always measurable” (Barouch & Kleinhans, 2015, p. 205). In addition, profitability occurs only when organizations fully implement quality management principles (Barouch & Kleinhans, 2015). In support of his theories, Deming “clearly demonstrated what seems counterintuitive; higher quality from improved processes results in lower costs and improved profitability” (Becker & Glascoff, 2014, p. 53).

Notwithstanding the evidence supporting a long-term perspective and in spite of Deming’s admonition in Out of the Crisis, the focus on managing by short-term results remained. In support of Deming’s perspective, Kotter explained:

For most of [the 20th] century, as we created thousands and thousands of large organizations …[but] we didn’t have enough good managers to keep all those bureaucracies functioning. So many companies and universities developed management programs, and hundreds and thousands of people were encouraged to learn management on the job. And they did. But people were taught little about leadership. To some degree, management was emphasized because it’s easier to teach than leadership…Unfortunately for us today, this emphasis on management has often been institutionalized in corporate cultures that discourage employees from learning how to lead. (Kotter, 1996, p. 27)

In addition to short-term perspectives and the lack of leadership, another failure is the incorrect application of analysis and analytical thinking. Ackoff describes analytical thinking as a method dating back to the Renaissance as “a three-step process…take it apart …understand the behavior of each part taken separately, and…try to aggregate the understanding of the parts into an understanding of the whole” (ACASA, 1992/2011, p. 11). For example, if an automobile starts to act erratically, a repair shop attempts to isolate the failed part and then replace it. In addition to automobile repair garages, this style of management is common to the Western World with attempts to manage organizational units “as well as possible” (ACASA, 1992/2011, p. 2). For example, if a particular group or individual in a QC laboratory does not follow a procedure, management isolates the faulty behavior and then retrains the accused. In general, Ackoff argues that humanity follows this general pattern, taking “corporations and schools apart into departments or disciplines, try to run each one, and then aggregate them into a whole” (ACASA, 1992/2011, p. 11). However, while this approach may work for short-term issues and simple systems, Ackoff argues the following:

We discovered …a fundamental shortcoming in that way of thinking. Since the system is a whole that cannot be divided into independent parts, when you do divide it up… you lose all of its essential properties. You cannot explain the behavior of a system by analysis. You can reveal its structure and say how it works, but you can’t say why it works the way it does… because explanation never lies inside of a system, it lies outside. (ACASA, 1992/2011, pp. 11-12)

In support of Ackoff’s perspective, Deming states, “A system cannot understand itself …you can learn all about ice and know very little about water” (ACASA, 1992/2011, p. 12). In addition, Meadows (2008, p. 12) cites a Sufi teaching story: “You think that because you understand ‘one’ then you must therefore understand ‘two’ because one and one make two. But, you forget that you must also understand ‘and.’” This is because it is “possible to improve the performance of each part taken separately and destroy the system at the same time…” (ACASA, 1992/2011, p. 2).

Long-Term Perspective, Reengineering, and Synthesis

While it is easy to blame an organization or its leadership for this failure in thinking, Deming acknowledges, “They’ve been taught that way, they’re educated that way. They don’t know anything else” (ACASA, 1992/2011, p. 17). Therefore, a long-term perspective “basically requires a change in thinking” (Barouch & Kleinhans, 2015, p. 206). This is true because “…it is not the people, but rather the prevailing management system within which we work that is a culprit …and there is a growing consensus that a new approach is needed” (Rother, 2009, p. xiv). This change in perspective is not a new approach.

In 1990, Dr. Michael Hammer discussed the concept as reengineering, whose focus is “on two things: making major improvements in existing processes and promoting business innovation” (Roth, 2013, p. 27). To this Roth (2013) added, “Reengineering starts at the top of the organization with an assessment of mission, strategic goals, and customer needs and then works downward, focusing on entire business processes” (p. 27). However, Roth (2013) contends that reengineering focuses only “on the key processes …[and] increasing efficiencies, no matter what the cost to employees” (p. 27). Therefore, because reengineering does not acknowledge “the organization as a whole …it loses the interactions between the processes [and] the weakness remains, just on a different level” (Roth, 2013, p. 27).

In contrast to reengineering, systems theorists introduced the concept of idealized redesign, defined as a change that “involves employees at every level …making sure that the different designs complement each other, thus making available the benefit of their expertise and experience as well as their commitment” (Roth, 2013, p. 28). Simply put, “in idealized re-design, you design the system as a whole and then derive the property of the parts from the properties of the whole …[because] the only way that we can think creatively about a system is to assume it was destroyed last night” (Ackoff Center for Advancement of Systems Approaches [ACASA], 1992/2011, pp. 14-15). Ackoff described this as synthesis and contended it was a remedy to analysis and “constitutes a major revolution” (ACASA, 1992/2011, pp. 1, 12-13). Ackoff’s comparison between analysis and synthesis appears in Table 1.

Table 1

Comparison of Analysis and Synthesis

Step 1
  Analysis (the traditional approach): “take the thing that you want to understand apart”
  Synthesis (systems thinking approach): “take the thing you want to understand as a part of a larger whole”

Step 2
  Analysis: “explain the behavior of each part taken separately”
  Synthesis: “explain the behavior of the containing whole”

Step 3
  Analysis: “aggregate your explanation of the parts into an understanding of the whole”
  Synthesis: “dis-aggregate the understanding of the whole into an understanding of the parts”

Note. Adapted from ACASA, 1992/2011, pp. 12-13.

Ackoff stated that the purpose of thinking via synthesis was to “explain by identifying the role or function of parts in a system in the larger system” (ACASA, 1992/2011, p. 13). This was necessary because of the following:

Each part of the system can affect the behavior of the whole, but no part has an independent effect on the whole… the performance of the whole is never the sum of the performance of the parts taken separately, but it’s a product of their interactions. And therefore, the basic managerial idea introduced by systems thinking is that to manage a system effectively you must focus on the interactions of the parts rather than their behavior taken separately. (ACASA, 1992/2011, p. 1)

While it is tempting, isolating and removing analytical thinking from practice is not sufficient because “getting rid of what you don’t want is not any assurance you’re gonna get what we do want” (ACASA, 1992/2011, p. 16). This is important because “the role of design in quality [is] the zero stage – the most important one – the one that should have the greatest effort and cost the most” (ACASA, 1992/2011, p. 16). In particular, Ackoff notes, “We have to redesign most of our products and not merely improve the quality of the existing product” (ACASA, 1992/2011, p. 18). This thought aligns with Deming’s teaching on innovation and his discussion of carburetors, in which he asks the following:

Where today are the makers of carburetors? …How could an automobile run without a carburetor? The makers of carburetors improved their product year by year. Customers were happy, loyal. What happened? Innovation. Came the fuel injector, which does the job of a carburetor, and a lot more. (Deming, 1994/2000, p. 9)

In summary, Deming comments, “The moral is that it is necessary to innovate, to predict needs of the customer, give him more” (Deming, 1994/2000, pp. 9-10). Therefore, merely improving products and services is not sufficient. Understanding the components of the system via “the reductionist dissection of regular science” (Meadows, 2008, p. 83) is a critical first step, but not identifying interconnections, functions, and purpose is a recipe for failure (Meadows, 2008, pp. 83-84). This is because, as Deming states, “you may reduce defects to zero [but still] go out of business” (ACASA, 1992/2011, p. 18). From a QC laboratory perspective, for example, the primary focus of investigations is often closing the investigation and resuming normal activities. However, while the opportunity presents itself, the organization only passingly considers long-term corrective actions. Because of this limited perspective, the organization reflexively applies corrections that do not last (i.e., blame-retrain), and it never develops or implements the ‘fuel injectors’ so valuable for future success.

Deming’s System

As described in Chapter 1, Deming presented a theory that summarizes his teachings on systems thinking. The purpose of Deming’s system is to describe an “outside view” to combat the fact that “a system can not understand itself” (Deming, 1994/2000, pp. 92-93). This System of Profound Knowledge contains four elements (Deming, 1994/2000, p. 93):

• Appreciation for a system.

• Knowledge about variation.

• Theory of knowledge.

• Psychology.

While “little of Deming’s system of Profound Knowledge is original …[his] major contribution was to tie these concepts together in the context of business” (Evans & Lindsay, 2008/2014, p. 60). In particular, Parry, Mate, Perla, and Provost (2013, p. 1902) commented, “Deming’s theory is that improvement will occur through the simultaneous application of methods underpinned by [SoPK] and subject-matter expertise.” In management, for example, Deming’s SoPK would prompt managers to realize that, because their own viewpoint is limited, they must solicit feedback from their employees, their customers, and those outside their industry to learn and grow. With respect to this, Schultz (2013, p. 21) commented:

The system of profound knowledge is a theory of related principles that requires a leader or manager to consider all organizational aspects when making decisions. This means recognizing how processes are interconnected and how they function as a whole within the larger environment so the organization can reach intended expectations. (Schultz, 2013, p. 21)

Deming’s SoPK is not without its criticisms. While Singh, Dean, and Chee-Chuong (2013) offer overall praise for Deming’s management method, they argue that “claims of universal applicability are better directed at the ‘macro’ or organization-industry country level” (p. 65). In contrast, however, “at the ‘micro’ or individual level …the theory permeates itself differently in organizations, ...[and] positional authority and tenure length are important in terms of acceptance of some key elements of the model” (Singh et al., 2013, p. 65). Because of this, they contend that acceptance rates may vary “in organizations located in some Western countries where organizations have become less hierarchical and have flatter structures,” and troubles may arise when comparing “Western organizations [which] are more egalitarian than similar organizations in Eastern countries and emerging economies” (Singh et al., 2013, p. 65).

Marshall, Pronovost, and Dixon-Woods (2013) state that “improvement science needs a genuine partnership between academics and front-line practitioners” (p. 420). In support of this partnership, they acknowledge the need for collaboration between the

“skepticism, scientific rigour, and methodological technical expertise” of researchers and the “content knowledge, a thorough understanding of working contexts, and practical wisdom” of practitioners (Marshall et al., 2013, p. 420). However, these researchers

argue that Deming’s SoPK is insufficient because “this conceptualization of a science of improvement is, by itself, too narrow and restrictive to address the challenges that face the health sector, and too often under-emphasizes robust assessment” (Marshall et al.,

2013, p. 419). To remedy this, they stress the following:

Partnerships between researchers and practitioners and between different

disciplines should be authentic relationships between equals, and no group should

be seen as subordinate or as the servant of the other. These partnerships should

entail mutual support and healthy challenge, rather than jealous guarding of

territorial domains. (Marshall et al., 2013, p. 420)

Parry, Mate, Perla, and Provost (2013, p. 1902) responded to the aforementioned

assessment by Marshall, Pronovost, and Dixon-Woods and acknowledged their

comments as “welcome and timely.” However, they argued that the four parts of

Deming’s SoPK “do not feel narrow and restrictive …[but] cover a range of scientific

disciplines, and many will recognize the connections between them and the disciplines

brought together under health services research” (2013, p. 1902). Nevertheless, while

they recognized that a number of improvement methods have come from Deming’s work,

they agreed that SoPK needs an expansion to encompass more scientific disciplines

(2013, p. 1902).

Barouch and Kleinhans (2015) discussed the general criticisms related to

Deming’s SoPK and other quality management (QM) ideologies. For example, they

outlined the following arguments (pp. 202, 204, 209, 211):

• “The lack of flexibility of QM reduces organizational innovation.”

• “QM new paradigms (systemic, pragmatic) create difficulties in understanding and implementing this discipline, as they contradict mainstream managerial approaches grounded upon Newtonian paradigms.”

• “The profitability of QM is not demonstrated.”

• “Participation [in QM] is a lure or is very controlled [as] employees can participate only in a predefined and constrained framework.”

In response to these arguments, however, Barouch and Kleinhans (2015) argued that apparent ambiguities and failed implementations were related to a lack of understanding of quality principles and timelines. They also noted a failure to admit that award-winning companies outperformed comparable non-winning companies on major financial indices. This lack of understanding extended to top management, who participated only reluctantly in quality-focused programs (Barouch & Kleinhans, 2015).

Overall, the quality movement spurred by Deming’s system “made a valuable contribution, not least in challenging overly technocratic, managerialist, or regulatory-focused approaches to change” (Marshall, Pronovost, & Dixon-Woods, 2013). As discussed previously, these other approaches included Smith’s division of labor, mechanism, and Taylor’s scientific management. In addition, Deming’s SoPK also was a good fit with the models introduced by Peter Drucker (Pearson, 1999). For example,

Petit (2015) evaluated Deming’s SoPK with respect to collaboration between competing executive MBA schools, and argued that MBA programs, “have no choice but to collaborate more frequently in order to expand the parameters of the sector …[or they will] be left fighting for a mature finite market of prospective students” (Petit, 2015, p.

63).

Like MBA schools, therefore, QC pharmaceutical laboratories and other business environments depend for success on improved coordination, cooperation, and communication between departments, especially when considering the finite market of

prospective customers, employees, and resources. To that end, Deming has offered his

System of Profound Knowledge as “a high-level complement to subject matter expertise

in the pursuit of improvement” (Bennet & Provost, 2015, Abstract) and a powerful design

for using “modern information technology” more effectively via a perpetual system for building and refining knowledge about complex systems (Pearson,

1999, p. 34).

Aligned with the idea of Deming’s perpetual system, this thesis sought a novel

approach for conducting investigations and root cause analysis via the System of

Profound Knowledge. To accomplish this goal, this thesis focused on two elements only:

Appreciation for a system and Psychology. However, Deming warned, “the various

segments of the system of profound knowledge …can not be separated” (Deming,

1994/2000, p. 93). To this Schultz (2013) added, “As a catalyst for leadership …these

four elements cannot be separated and applied individually. All elements interact with

one another to create a comprehensive strategy for leading others and managing

individual behavior” (p. 22). This is true because “the performance of the whole [system

is] a product of [the] interactions [of its components]” (ACASA, 1992/2011, p. 1).

Notwithstanding these admonitions, this thesis chose to focus on only two

elements in order to emphasize the prominence of these elements in investigations.

However, the intent of this emphasis was not to encourage segmentation of SoPK via

analytical thinking. Instead, this thesis sought to encourage synthesis via Deming’s perpetual system for building and refining knowledge (Pearson, 1999). For this purpose,

this thesis sought to demonstrate how Appreciation for a System and Psychology can

help pharmaceutical QC laboratories:

1. Implement Ackoff’s idealized re-design (ACASA, 1992/2011, p. 15),

2. Establish a genuine partnership between management and the workforce (see Marshall et al., 2013), and

3. Encompass more scientific disciplines (Parry et al., 2013) outside of QC and the regulatory affairs typical of pharmaceutical quality assurance

With this aim, this thesis conducted a more thorough review of the extant literature to

better understand Deming’s Appreciation for a System and Psychology.

Appreciation for a System

Schultz (2013) described Deming’s appreciation for a system as “the ability to

understand the relationship among system components - suppliers, producers and

customers - and how they contribute to the overall good of the organization, its

stakeholders and adjoining environment” (p. 21). Deming first introduced this concept of

relationships in Out of the Crisis via his depiction of production viewed as a system

(Deming, 1982/2000, p. 4). In his design of production as a system, Deming

demonstrated that all aspects of production eventually linked to each other, as the output

of distributed products produced information via consumer research, which then

“provides a feedback loop for continual improvement of product or service, and continual

learning” (Deming, 1994/2000, p. 59). However, as tempting as it was to implement

computerized information systems in order to understand system linkages, Deming

warned that new machinery and the latest gadgets were not the answer (Deming, 1982/2000, pp. 12-13). In addition, instead of “people charging this way and that way” (Deming, 1982/2000, pp. 12-13), organizations must first learn to understand systems before they can appreciate them.

Definition and Characteristics of a System

By definition, a system is a network of interdependent and interconnected components that are organized and work together for the aim of the system (Deming,

1994/2000, pp. 50, 95; Meadows, 2008, p. 11). As Lagrosen and Travis (2015) stated,

“A common example of a well-managed system is an orchestra where each player supports each other creating a larger whole – a system, which is more than the collection of the parts.” To explain system structure, Meadows conceptualized systems as bathtubs filled with water (Meadows, 2008, p. 19). The elements of a bathtub include the tub itself, a faucet and drain, and water in the bathtub. With water in the tub, the current water level is termed its stock, which Meadows defined as follows:

The foundation of any system …the elements of the system that you can see, feel,

count, or measure at any given time [but] a stock does not have to be physical.

[For example,] good will toward others …[and] supply of hope …are both stocks.

(2008, pp. 17-18)

Stocks refer to the current quantity or state of an element of the system. These can be physical inventories, equipment or instruments, employees and management, or less tangible concepts like level of training, experience, and trustworthiness. However, a stock is only a portion of the system, because in a bathtub the faucet and drain act as

sources of change for the stock (i.e., the water). Meadows explained this concept as follows:

Stocks change over time through the actions of a flow. Flows are filling and

draining, births and deaths, purchases and sales, growth and decay, deposits and

withdrawals, successes and failures. A stock, then, is the present memory of the

history of changing flows within the system. (2008, p. 18)

In pharmaceutical QC laboratories, for example, a stock can be the level of highly skilled and error-free employees. In terms of staffing, therefore, inflow includes hiring and training efforts, whereas the outflow consists of departing employees via retirement, terminations, and other sources of attrition. In the case of promotions or lateral transfers, however, the total stock may not change in spite of local changes. Therefore, before making generalizations regarding stocks and flows, it is important to understand the boundaries of a system.
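To make the stock-and-flow view concrete, the following minimal sketch (in Python) simulates a hypothetical laboratory staffing stock changed by a hiring inflow and an attrition outflow; the names and rates are illustrative assumptions, not data from any actual laboratory.

    def simulate_staffing(initial_staff, hires_per_month, attrition_per_month, months):
        """Track a staffing 'stock' changed by hiring (inflow) and attrition (outflow)."""
        staff = initial_staff
        history = [staff]
        for _ in range(months):
            staff += hires_per_month      # inflow raises the stock
            staff -= attrition_per_month  # outflow drains the stock
            history.append(staff)
        return history

    # Illustrative values only: start with 20 analysts, hire 1 per month, lose 2 per month.
    print(simulate_staffing(20, 1, 2, 12))  # the stock erodes even though hiring never stops

Even this toy model illustrates the point above: the stock at any moment is the accumulated memory of its past flows, and a promotion or lateral transfer would change neither flow nor stock at this boundary.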

System Boundaries

When describing a system, it is important to consider its relevant boundaries.

Deming stated, “The boundary of the system may be drawn around a single company, or around an industry, or as in Japan in 1950, the whole country” (1994/2000, p. 55). In pharmaceutical QC laboratories, for example, a traditional system may include all personnel, instruments, and material within the laboratory.

There is logic in limiting system boundaries. For example, Deming advised, “The bigger be the coverage, the bigger be the possible benefits, but the more difficult to manage” (1994/2000, p. 55). Ackoff adds that the system in question is “always the

40 largest system over which you have control, you have access to. There is no point in

designing a system that you can’t affect” (ACASA, 1992/2011, p. 20). Meadows also

comments on system boundaries, stating that systems analysts often “make boundaries too large …[with] diagrams that cover several pages …This ‘my model is bigger than your model’ game results in enormously complicated analyses” (Meadows, 2008, p. 98).

With limited boundaries in mind, the stocks and flows of interest are those over

which the organization has control. For QC pharmaceutical laboratories, this may include local policies and procedures, personnel, and perhaps materials, instruments,

and equipment. However, because a parent organization or external entity may influence

some of these areas, the laboratory may need to modify their system boundaries. For

example, while employee staffing and training directly affects the laboratory, human

resource policy may govern training activities, union guidelines may moderate advancement of the successful in favor of the tenured, and corporate policy may encourage turnover via

interdepartmental transfers.

While limiting boundaries holds merit, organizations should not simply constrain

systems boundaries without cause. Because “systems can be nested within systems”

(Meadows, 2008, p. 16), a QC laboratory system can exist within a larger, parent system

focused on customer service, and this latter system can address the aforementioned issues

with human resource policies, union guidelines, and turnover. In addition, “the right

boundary for thinking about a problem rarely coincides with the boundary of an academic

discipline” (Meadows, 2008, pp. 98, 183). Cooperation between disciplines is required,

but this must be more than the typical act of “putting together people from different

disciplines and letting them talk past each other” (Meadows, 2008, p. 183). Instead, the educated must “admit ignorance and be willing to be taught, by each other and by the system” (Meadows, 2008, p. 183).

The boundaries of nested systems that combine different disciplines are common in QC pharmaceutical laboratories. For example, in addition to the traditional QC fields, other supporting staff may include validations, quality assurance, finance, and human resource departments. A more expansive system may include sister facilities and corporate staff, and an even larger system may include local governments, regulatory inspectors, and national lawmakers. Even competitors can be within the boundaries of a system, as

“efforts by competitors, acting jointly or together, aimed at expanding the market and to meet needs not yet served, contribute to optimization for all [involved]” (Deming,

1994/2000, p. 56). Lastly, a system boundary can also be time-bound, but caution is needed when considering time, because of the following:

In a strict systems sense, there is no long-term, short-term distinction.

Phenomena at different time-scales are nested within each other. Actions taken

now have some immediate effects and some that radiate out for decades to

come…You need to be watching both the short and the long term – the whole

system. (Meadows, 2008, pp. 182-183)

Nevertheless, as discussed previously, caution is necessary to understand the system over which the organization has influence because “there is no point in designing a system that you can’t affect” (ACASA, 1992/2011, p. 20). Therefore, in order to avoid

“enormously complicated analyses” (Meadows, 2008, p. 98), good judgment is necessary to define and then increase system boundaries (Deming, 1994/2000, p. 29).

In his discussion with Deming, Ackoff (ACASA, 2011, p. 21) shared an example of boundary selection by a low-level management team at Kodak. In the example, the team first implemented improvements at one particular location. After a successful implementation, the group made incrementally larger proposals and improvements at other departments and then finally organized a joint venture with IBM. At the conclusion of the story, Ackoff added, “there’s a successive enlargement of the system and its design, incorporating larger systems over which they had no control, but which they could influence by the power of the ideas” (ACASA, 2011, p. 21).

In summary, there are appropriate limits to system boundaries as well as inappropriate ones. For example, a pharmaceutical QC laboratory cannot influence corporate mergers or significantly influence customer demand for pharmaceuticals.

Instead, the organization can establish system boundaries that circumscribe appropriate activities (e.g., documentation errors) and then expand the boundaries as necessary to influence needed change (e.g., provide evidence that documentation errors are related to an analytical test method’s formatting and grammar). In support of this conclusion,

Meadows admonishes:

Ideally, we would have the mental flexibility to find the appropriate boundary for

thinking about each new problem. We are rarely that flexible …[However,] it’s a

great art to remember that boundaries are of our own making, and that they can

and should be reconsidered for each new discussion, problem, or purpose.

(Meadows, 2008, pp. 98-99)

Aim of the System

While tangible system elements and boundaries are important to understand system behavior, these are “often the easiest parts to notice” (Meadows, 2008, p. 12). In addition to these tangible aspects, “a system must consist of three kinds of things: elements, interconnections, and a function or purpose” (Meadows, 2008, p. 11). In

pharmaceutical QC laboratories, for example, the system consists of its tangible elements

(e.g., staff, facilities, instruments, materials, boundaries), their interconnections (e.g.,

reporting structure, internal procedures, interpersonal relationships), and function or

purpose (e.g., perform tests and provide results). Within the boundaries of the system,

organizations can adequately identify the system elements and interconnections. Next,

the system’s function or purpose—its aim—merits further discussion.

As Deming stated, “A system must have an aim. Without an aim, there is no

system” (1994/2000, pp. 50, 94-95). In natural biological systems, for example, the aim

is typically self-preservation and replication or reproduction. However, all systems, be

they natural or man-made, “can change, adapt, respond to events, seek goals, mend

injuries, and attend to their own survival in lifelike ways, although they may contain or

consist of nonliving things” (Meadows, 2008, p. 12).

In man-made systems (e.g., business organizations), the aim of the system is

traditionally thought to be the unified sum of strategic goals. Such organizations may

express these goals via their mission, vision, or strategy statements and can include

claims to fulfill the needs of a targeted group, become a market leader, or generate profitable returns for shareholders. In traditional reporting structures, executives evaluate the managers on their ability to meet stated goals because “keeping sub-purposes and overall system purposes in harmony is an essential function of successful systems”

(Meadows, 2008, p. 16). However, while mission and vision statements appear throughout internal metrics, company hallways, and external webpages, the system aim can be difficult to see. As Meadows argued,

A system’s function or purpose is not necessarily spoken, written, or expressed

explicitly, except through the operation of the system. The best way to deduce a

system’s purpose is to watch for a while to see how the system behaves

…[because its aim is] deduced from [the] behavior [of the system], not from

rhetoric or stated goal. (Meadows, 2008, p. 14)

In spite of well-crafted mission and vision statements, “organizations [are] constantly evolving entities that adapt to on-going experience to optimize their ability to be successful” (Lagrosen & Travis, 2015, p. 565). Because of this constant evolution, however, organizations tend to overemphasize short-term outcomes and results. These include action-item lists (Rother, 2009, p. 32), uniform output quotas (ACASA,

1992/2011, p. 69), quarterly profits, and management by objectives (Deming, 1994/2000, pp. 24, 30, 33). As described in Deming’s Red Bead experiment, the foreman’s demands, strict work standards, and motivational “posters to help the Willing Workers” (Deming,

1994/2000, pp. 154-162) plainly illustrate the overemphasis on “results-oriented level of thinking” (Johnson, 2009, pp. viii-ix) that permeates so many organizations. In

pharmaceutical QC laboratories exhibiting this thinking, for example, analysts who report failures receive the reproach of remedial training, which ultimately incites fear and discourages honest disclosure (Deming, 1982/2000, pp. 264-268).

While most systems claim to be customer-focused, the resultant behavior focuses

on profits and results. Because of this, management information systems do not align

with system goals (Kotter, 1996, p. 111), and individual departments compete against

each other for recognition and funding, often at the expense of the supposed aim of the

organization (Deming, 1994/2000, p. 29; Meadows, 2008, p. 85). In all, this behavior is

due to the overall failure of “the strategic planning process, which still focuses much too

much on short-term financial information” (Kotter, 1996, p. 111), generating substantial

“losses whose magnitudes cannot be evaluated [and] cannot be measured” (Deming,

1994/2000, pp. 22, 24, 30, 33).

In defining what constitutes a system’s aim, therefore, a short-term result is not

the appropriate aim. Instead, an aim is a target condition that offers a “broad description

of a condition” of a future state (Rother, 2009, pp. 43, 49) and an expectation of the

system’s “performance over time” (Meadows, 2008, p. 88). By this definition, a short-

term result is but one measured performance parameter (i.e., an effect) that the target

condition produces (i.e., the cause). To understand cause and effect, and effectively

interpret system behavior and aim, organizations must “[place] events into historical

context…[and] strive to understand the connections between the…event, …the resulting

…behavior, and the …characteristics of the [system’s] structure” (Meadows, 2008, pp. 88-89).

While theoretically simple, organizing events via their historical context is difficult to execute. This is because traditional management styles focus on the “planning, organizing, and controlling” of short-term results (Kotter, 1996, p. 126), whereas system characteristics and complexities demand long-term respect (Meadows,

2008, p. 87). In contrast to management, therefore, focusing the system aim on a target condition requires the long-term vision associated with leadership (Kotter, 1996).

In support of the long-term aim of leadership, Rother contends, “The objective is

not to win, but to develop the capability of the organization to keep improving, adapting,

and satisfying dynamic customer requirements” (Rother, 2009, p. 10). In addition,

Deming urged, “The aim proposed here for any organization is for everybody to gain—

stockholders, employees, suppliers, customers, community, the environment—over the

long term” (1994/2000, p. 51). However, “the aim of leadership is not merely to find and

record failures…but to remove the causes of failure: to help people do a better job with

less effort” (Deming, 1982/2000, p. 248). In agreement, Meadows added, “Hierarchical

systems evolve from the bottom up. The purpose of the upper layers of the hierarchy is

to serve the purposes of the lower layers” (2008, p. 85). To achieve this, Meadows

admonished,

Hierarchy must balance the welfare, freedoms, and responsibilities of the

subsystems and total system – there must be enough central control to achieve

coordination toward the large-system goal, and enough autonomy to keep all

subsystems flourishing, functioning, and self-organizing. (Meadows, 2008, p. 85)

Therefore, the aim of the system—the target condition of the future state—must achieve this appropriate balance, control, and autonomy.

Achieving this seemingly idyllic aim is possible. One notable example is that of the success achieved by the Toyota Motor Corporation. As Rother describes, for example, Toyota’s aim is not to “[achieve] target conditions at any cost, [but] first determine where you want to go, and then [decide] how to get there within financial and other constraints” (2009, p. 52). Rother argues that this holds true because “an economic break-even point is a dependent variable, not an independent constraint that determines direction” (2009, p. 52). However, as Becker and Glascoff describe, an established aim is not immune to internal pressure, and organizations must maintain focus to preserve it:

Popular press has reported that recent problems experienced by Toyota may have

resulted because the organization changed its aim and therefore the measures

management used to guide decisions. After 50 years of building high-quality

automobiles using organizational and managerial practices that used process

measurements as espoused by Deming, it was reported that Toyota changed its

primary goal. The change reported was that Toyota aimed at becoming the largest

car company instead of the best car company historically accomplished by

continually improving quality from process improvements. (2014, p. 53)

An organization, therefore, must aim to drive the direction of the system, but the aim must be worthwhile because an “aim is clearly a matter of clarification of values, especially on the choice between possible options” (Deming, 1994/2000, p. 51).

Feedback Loops, Dynamic Equilibrium, and Delayed Effects

As discussed previously, a stock is a quantity of an element in a system, and a flow is the source of actions that change the stock. In the case of the bathtub, the water level is the stock, and the faucet and drain are sources of flows (i.e., flow of water). An understanding of stocks and flows is important to appreciate the system. For example,

“the presence of stocks allows inflows and outflows to be independent of each other and temporarily out of balance with each other” (Meadows, 2008, p. 24). As business environments are often in constant flux, a healthy stock protects the business from erratic flows. There are three issues, however, that complicate the bathtub analogy: feedback loops, dynamic equilibrium, and delayed effects.

Feedback loops complicate the otherwise simple bathtub analogy. Meadows describes feedback loops as “mechanisms that create …consistent behavior …[and] are formed when changes in a stock affects the flows into or out of that same stock” (2008, p.

25). In the bathtub analogy, an example of a feedback loop would be a mechanism that, when the water level drops below a certain level, turns on the faucet, stoppers the drain, or both. In a workplace environment, a feedback loop would be the link between management encouragement, worker morale, and worker output. More specifically, when management appropriately rewards good work, morale typically increases, leading to more good work and rewards for the employees. In contrast, if management negatively influences good work (i.e., through blame retrain), morale typically decreases, resulting in lower quality work and more negativity from management. However, “in

real systems feedback loops rarely come singly. They are linked together, often in fantastically complex patterns” (Meadows, 2008, p. 34). Systems thinking, therefore, relies on understanding the numerous positive and negative feedback loops that extend effects through the system.

When coupled with feedback loops, the concept of dynamic equilibrium also complicates systems understanding. In the bathtub analogy, for example, increasing the stock is conceptually simple: turn on the flow (the faucet) and then turn it off at the appropriate level. In contrast, with both flows active (i.e., the drain unplugged and faucet on), the level of stock depends on the balance between the two flow rates. Specifically, the stock changes if the rates are not equal or is in dynamic equilibrium if the rates are equal (Meadows, 2008, pp. 20-22). In the latter case, organizations may incorrectly assume the system is in balance and can tolerate change. For example, in the aforementioned relationship between management encouragement and worker output, a system in dynamic equilibrium may appear to be in balance. However, taking action on complex systems may disrupt dynamic equilibrium, especially if the stocks, flows, and feedback loops are misunderstood. Specifically, if management observes that worker output is stable, it may reduce the ‘stock’ of management encouragement. However, as encouragement is a flow that supports morale, reducing it can disrupt equilibrium, leading to decreased morale. This, of course, can trigger a loss of worker output and eventual reprimands from management. Therefore, organizations must be aware of the stocks, flows, and feedback loops that sustain dynamic equilibrium, as the sketch below illustrates.
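To illustrate how a misread equilibrium can unravel, the following sketch couples a morale stock to an encouragement inflow and a proportional drain; the variable names, rates, and the month-6 policy change are hypothetical values chosen only for illustration.

    def simulate_morale(months, encouragement):
        """Morale is a stock; encouragement is an inflow, and everyday friction
        drains morale in proportion to its current level (a balancing loop)."""
        morale = 50.0
        for month in range(months):
            # Hypothetical policy: management halves encouragement after month 6
            # because output "looks stable," mistaking equilibrium for slack.
            inflow = encouragement if month < 6 else encouragement / 2
            outflow = 0.10 * morale  # friction drains 10% of morale each month
            morale += inflow - outflow
            print(f"month {month + 1}: morale = {morale:.1f}")

    simulate_morale(12, encouragement=5.0)
    # With an inflow of 5 and a 10% drain, morale holds near 50 (dynamic equilibrium);
    # halving the inflow slides the equilibrium toward 25, with the damage appearing
    # only gradually rather than at the moment of the policy change.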

The tendency is to underestimate the influence of feedback loops and dynamic equilibrium because “[we] focus more easily on stocks than on flows” (Meadows, 2008, p. 22). In addition, “we are too fascinated by …events [and] pay little attention to

…history” (Meadows, 2008, p. 90), which can lead the human mind to struggle with the consequences of delayed effects (Deming, 1994/2000, p. 63).

In the short-term, most systems can “survive the buffeting of the world, and, within limits, regaining their composure and proceeding on about their business”

(Meadows, 2008, p. 75). However, “the benefits of a fundamental solution may not show up for a long time” (Deming, 1994/2000, p. 63). Therefore, an understanding of the

“dynamics of stocks and flows – their behaviors over time” is critical to success in complex systems in the long term (Meadows, 2008, p. 19).

In addition to a short-term perspective, there is a tendency to focus on inflows rather than on outflows when evaluating a system’s behavior (Meadows, 2008, p. 22).

With regard to a bathtub, this would mean focusing on the faucet rather than evaluating water loss via a defective drain plug. In a pharmaceutical QC laboratory, by comparison, there may be an emphasis on the quality of new hires obtained via recruiting over preventing employee turnover. Coupled with this, there may be numerous flows that are active all at once, and multiple people acting within the system, which may “add up to the ebbs and flows, successes and problems, of all sorts of systems” (Meadows, 2008, p. 25).

Lastly, the volume of stock may significantly dwarf the volumes produced by the flows. Therefore, any action taken to correct one flow (e.g., increase hiring of skilled workers) may not produce the expected result due to competing flows (e.g., high turnover

due to low morale). In other words, while the flow of water through the faucet or drain can change abruptly, “it is much more difficult to change the level of water—the stock—quickly …because flows take time” (Meadows, 2008, p. 23). By extension, therefore, the rate of water via the in- or outflow determines the time required to change a stock. To fill a bathtub quickly, a very large faucet can provide the requisite flow. By the same reasoning, if an organization requires a stock of highly skilled and error-free employees, the time to change the stock depends on the rate at which the organization can hire new, qualified employees. In contrast, “to sack people lowers costs straightaway, but in due time may cause serious consequences” (Deming, 1994/2000, p. 63). This distinction is

“key to understanding why systems behave as they do …[because] stocks, especially large ones, respond to change, even sudden change, only by gradual filling or emptying”

(Meadows, 2008, p. 23).
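The arithmetic behind this distinction is simple: the time needed to change a stock equals the desired change divided by the net flow rate. A small helper makes the relationship explicit; the figures are hypothetical.

    def months_to_reach(target, current, inflow_per_month, outflow_per_month):
        """Time to change a stock = desired change / net flow. Returns None when
        the net flow points away from the target, so the stock never gets there."""
        change = target - current
        if change == 0:
            return 0.0
        net = inflow_per_month - outflow_per_month
        if net == 0 or (change > 0) != (net > 0):
            return None
        return change / net

    # Hypothetical: grow from 20 to 32 qualified analysts, hiring 3 and losing 2 per month.
    print(months_to_reach(32, 20, 3, 2))  # 12.0 months, despite the 'large faucet'

However aggressive the hiring push, the stock can change no faster than the net flow allows, which is the gradual filling Meadows describes.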

Resilience, Self-Organization, and Hierarchy

Feedback loops, dynamic equilibrium, and delayed effects establish the foundation for basic system behavior. Without these elements, the system structure is incomplete and likely just a collection of cause-and-effect relationships. However, these three elements are merely the basis for system behavior. Three additional concepts are necessary to describe why systems work so well: resilience, self-organization, and hierarchy (Meadows, 2008).

The term resilience denotes the “measure of a system’s ability to survive and persist within a variable environment …[and] even after a large perturbation” (Meadows, 2008, p. 76). Applied to a bathtub, for example, resilience would refer to the faucet’s ability to

maintain and restore the water level in spite of an open drain or a bathing child splashing.

With respect to QC laboratories, a resilient training system would restore proficiency and expertise after a large exodus of the staff (e.g., layoffs, retirement). Meadows warned, however, “Resilience is not the same thing as being static or constant over time” (2008, p.

77). Instead, she admonished, “Resilient systems can be very dynamic …[with] short-term oscillations, or periodic outbreaks, or long cycles of succession, climax, and collapse may in fact be the normal condition, which resilience acts to restore” (Meadows,

2008, p. 77).

The second concept is self-organization, which Meadows defined as “the capacity of a system to make its own structure more complex” (Meadows, 2008, p. 79). Meadows added that self-organization is a product of higher-level resilience and results from

“feedback loops that can learn, create, design, and evolve ever more complex restorative structures” (Meadows, 2008, p. 76). However, as under Adam Smith’s division of labor and mechanism theory, “Like resilience, self-organization is often sacrificed for purposes of short-term productivity and stability …[which] are the usual excuses for turning creative human beings into mechanical adjuncts to production processes …or …[treating] people as if they were only numbers” (Meadows, 2008, p. 79).

This perspective, of course, aligns with Deming who argued, “the aim of leadership should be to improve the performance of man …not merely find and record the failures of men” (Deming, 1982/2000, p. 248).

The last concept is hierarchy, or the “arrangement of systems and [their interconnected] subsystems …[that] can largely take care of themselves …and yet serve

the needs of the larger system” (Meadows, 2008, p. 82). The following items define some of the characteristics of a well-designed hierarchy (Meadows, 2008, pp. 82-85):

• Relationships within a subsystem are stronger and denser than those between subsystems;

• Each subsystem can be partially decomposed such that it is able to function, at least partially, in its own right;

• Hierarchies grow from the lowest level upward, from small portions to their entirety;

• The subsystems’ goals align with the goals of the total system, and vice versa; and

• There is enough central control to coordinate all of the subsystems, yet enough autonomy “to keep all subsystems flourishing, functioning, and self-organizing.”

Hierarchies do not form spontaneously, nor is their interaction explained via the

“reductionist dissection” of analysis (Meadows, 2008, pp. 83-84). Instead, their complex interaction evolves from self-organization and the promise of increased resilience.

Whereas “complex systems can evolve from simple systems only if there are stable intermediate forms” (Meadows, 2008, p. 83), the goals of the system and its subsystems must align in order to form a stable hierarchy. If the subsystems dominate, selfish competition between departments produces sub-optimization (Deming, 1994/2000, p. 82-

87; Meadows, 2008, p. 85). If the total system dominates, the resultant behavior devolves into managing by strict numerical goals (Deming, 1994/2000, p. 31), which only promotes system traps like policy resistance or rule beating (Meadows, 2008, pp.

115, 136).

In summary, in addition to all the important stocks, flows, and feedback loops, dynamic systems also consist of certain elements—like resilience, self-organization, and hierarchy—by which organizations can better monitor and influence system behavior

(Meadows, 2008, pp. 25, 39-40, 85). Understanding these additional elements is important because “a flow can’t react instantly” to an input (Meadows, 2008, p. 39). In addition, because of the delays in receiving and processing system behavior, actions taken in a system only influence future behavior, not the past (Meadows, 2008, p. 39).

Therefore, instead of “looking for who’s to blame” (Meadows, 2008, p. 34), organizations should evaluate all critical system elements in order to maintain system sustainability over the long-term (Meadows, 2008, p. 85).

Holes in the System and Swiss Cheese

Like all models, the bathtub as a system is merely a “simplification of the real world” (Meadows, 2008, p. 22). Because “most troubles and most possibilities for improvement add up to proportions, something like …94% belong to the system (the responsibility of management)” (Deming, 1994/2000, p. 33), additional models are necessary to understand how the system structure affects results. For this reason, a second model helps illustrate the complexities surrounding system flows.

This second model, termed the “dynamics of accident causation model”

(Naysmith, 2012), focuses on the active and latent factors that influence the failures of complex systems (Reason, 1990b). In this model, Reason visualizes a system as a stack of successive layers, or “planes lying one behind the other in an ordered sequence”

(1990b, p. 479). The planes consist of the “essential, benign components” (1990/2003, pp. 199-200) of the structure and defenses of an organization against failure. The

However, in contrast to solid layers, there are holes in each layer in Reason’s model that represent weaknesses or gaps in system defenses against failure. For a pharmaceutical QC laboratory, this may include uninformed decisions from upper management, loss of middle management or support staff due to turnover, frontline workforce inexperience, aging equipment, or poorly implemented policies and procedures. In this model, colloquially termed the Swiss Cheese theory, failures do not occur unless there exists a path through the layers as “the holes …momentarily line up to permit a trajectory of …opportunity” (Reason, 2000, p. 769).
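A toy calculation shows why layered defenses work even though every layer has holes. In the sketch below, each layer is treated as an independent chance of letting a failure through; the per-layer probabilities are invented for illustration, and real defenses are neither independent nor static, as the next paragraph emphasizes.

    import random

    def trajectory_passes(hole_probabilities):
        """A failure propagates only if it finds a hole in every layer (Swiss Cheese)."""
        return all(random.random() < p for p in hole_probabilities)

    # Hypothetical layers: leadership decisions, procedures, supervision, the analyst.
    layers = [0.10, 0.20, 0.15, 0.30]
    trials = 100_000
    failures = sum(trajectory_passes(layers) for _ in range(trials))
    print(f"observed failure rate: {failures / trials:.4%}")
    # Expected rate is the product 0.10 * 0.20 * 0.15 * 0.30 = 0.09%; each added or
    # tightened layer multiplies the protection, yet no single layer is error-free.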

Instead of a static model, however, Reason adds, “Each of these planes has windows of [failure] opportunity, but they are in continual flux due to the largely unpredictable influences of both intrinsic and extrinsic factors” (1990/2003, p. 209). In other words, the planes and their holes change in position and size over time due to organizational and personal factors such as corporate culture, leadership and management ability, local support, task complexity, and other defensive barriers (Reason,

1995, p. 83).

Overall, therefore, failures occur when local variable events (e.g., unsafe acts, human error) combine with changing but latent weaknesses due to fallible organizational

decisions and processes (Reason, 1990b, p. 479; Reason, 1995, p. 83). Naysmith (2012)

describes this model as…

a diagram …that illustrates all the failure points in the system that produced the

undesirable outcome. A perfect storm of a problem, the failure points were

precisely aligned in layer upon layer …creating an aperture through which a

bullet could pass without resistance, hitting the failure target. (Naysmith, 2012,

p. 1)

In the case of failure investigations, for example, instead of focusing solely on front-line

failures, “when an adverse event occurs, the important issue is not who blundered, but

how and why the defenses failed” (Reason, 2000, p. 768). In addition,

“the greatest dangers stem not so much from the breakdown of a major component or

from isolated operator errors [but] from the insidious accumulation of delayed-action

human failures occurring primarily within the organizational and managerial sectors”

(Reason, 1990b, p. 476).

This accumulation of latent failures in an organization acts as a stock in a system.

In addition, because “stocks usually change slowly [and] can act as delays, lags, buffers,

ballast, and sources of momentum in a system” (Meadows, 2008, p. 23), these latent

failures act as sources of momentum in an organization. Therefore, with respect to

human error in pharmaceutical QC laboratories in particular, the organization cannot

merely alter this momentum with remedial training. Instead, the organization must focus

on specific system influences on human error, such as hiring policies, training

effectiveness and periodicity, and employee morale, thereby demonstrating an overall awareness of the system.

Management of a System

Instead of focusing on system influences and demonstrating an awareness of the system, organizations tend to “accuse the operators of a lack of discipline [even when] the operators are doing their best” (Rother, 2009, p. 13). These accusations and other

“decisions based on the measurement of tangible variables” fail to take into account all inputs—the flows—that produced the system (Becker & Glascoff, 2014, p. 60). While all variables are important, managing solely by these tangible variables and their immediate results tends to “only make things worse” (ACASA, 2011, p. 69) because of the failure to appreciate the system (Becker & Glascoff, 2014, p. 53).

In addition, using “measures of mistakes, defects, or problems to guide [organizational] action” inhibits continuous improvement (Becker & Glascoff, 2014, p. 60). Therefore, because mistakes occur, the goal should not be to cast blame. In particular,

Even in the best-run organizations, a significant number of influential decisions

will subsequently prove to be mistaken. Fallible decisions are an inevitable part

of the design and management process. The issue is not so much how to prevent

them, but how to ensure that their adverse consequences are detected and

recovered. (Reason, 1990b, p. 480)

Recalling the Swiss Cheese model, latent hazards exist in all organizations, and it is not until “a poor-quality act is done by an individual [that] the slices of Swiss cheese

…align” to produce the failure (Naysmith, 2012, p. 2). Therefore, “the problems lie in the system—for which management is responsible” (Rother, 2009, p. 13).

In contrast to measuring failures, successfully managing a system requires an appreciation of the system, especially because “the more removed [management is from] front-line activities …the greater [its] potential danger to the system” (Reason,

1990/2003, p. 174). In particular, sustained improvements “come from continuous attention to and measurement of processes” (Becker & Glascoff, 2014, p. 60). This holds true because of the increase in “complex, tightly-coupled …and highly defended systems

[that] have become increasingly opaque to the people who manage, maintain, and operate them” (Reason, 1990/2003, p. 179). However, the measurement must evaluate the entire system, not a limited aspect of it, lest an organization misunderstand and mischaracterize the problem at hand (ACASA, 2011). This holds true because accurate characterization and precise measurement of all system components are not possible.

Therefore, organizations should seek to “measure imprecisely the right thing [rather] than

…measure precisely the wrong one” (ACASA, 2011, p. 36).

Identifying and measuring the “right thing” requires an appreciation of the system. As discussed previously, this is essentially the purpose of a system aim. In particular, “The aim is a value judgment…[and] it is management’s job to direct the efforts of all components toward the aim of the system” (Deming, 1994/2000, p. 50).

However, organizations cannot merely direct a system away from an undesirable state or attribute (i.e., human error) because moving away from these does not create value

(Deming 1994/2000, p. 51), nor will it generate the adaptation and continuous

improvement necessary for “long-term survival in highly competitive markets” (ACASA,

2011, pp. 15-16; Rother, 2009, p. 166). Instead, because “quality is a product, not a method” (ACASA, 2011, p. 29), management must focus on “the approach, the means, we can utilize for dealing with the unclear path to a new desired condition, not what the content and steps of our actions—the solutions—will be” (Rother, 2009, p. 8).

Managing by means is a critical departure from the Western management style in which the means are subordinate to the results, objectives, numerical goals, and share of market (Deming, 1994/2000, pp. 30-34, 56; Johnson, 2009, pp. viii-ix; Rother, 2009, p.

127). Accomplishing this different style requires that a manager identify leverage points, defined as “places in the system where a small change could lead to a large shift in behavior” (Meadows, 2008, p. 145). In contrast to managing by results, however, managing by means via leverage points requires an appreciation of the system. This is because “leverage points are points of power” (Meadows, 2008, p. 145), but “well- intentioned [intervenors often] pull the [policy] lever in the wrong direction” (Meadows,

2008, pp. 56-57). As demonstrated in the funnel experiment summarized by Deming

(1982/2000, p. 327; 1994/2000, p. 190), intervenors make adjustments based on the reactions of others, instead of “improving the process …to create win-win results for all involved parties” (Becker & Glascoff, 2014, pp. 56-57). Therefore, “management of a system …requires knowledge of the interrelationships …within the system [with the aim of] cooperation between components toward the aim of the organization” (Deming,

1994/2000, p. 50).

Leadership of a System

Attempts to manage a system accomplish nothing if the system does not have an aim because “the aim precedes the organizational system” (Deming, 1994/2000, p. 52).

Management alone, however, also accomplishes nothing because consistent leadership is required to maintain and effectively communicate the aim (Deming, 1994/2000, p. 50).

Furthermore, in contrast to Frederick Taylor’s era of scientific management, the world is

“vastly more complex and interconnected …[and] moves in chaotic and unpredictable ways …with increasing volume and velocity” (Weber, 2013, p. 13). Therefore, “…the challenge …is not to turn the heads of executives and managers toward implementing new production or management techniques” (Rother, 2009, p. 19) but to encourage and oblige “leadership to sponsor and energize the determination of the aim” (Deming,

1994/2000, p. 52).

Leadership, however, “is not the same as authority,” nor does it exist solely in formally assigned roles (Weber, 2013, pp. 191, 193). Instead, just as “quality begins with the intent” (Deming, 1982/2000, p. 5), leadership is the result of those who lead by adopting roles and perform tasks that encourage others to follow (Weber, 2013, pp. 192-

193). In addition, true, humble leadership develops within a functioning team, in contrast to the egocentric “solo leader [who] receives [their] energy from hubris and conceit”

(Weber, 2013, pp. 193, 199) or the absence of true direction when “whoever is most persuasive wins and sets the direction for a while” (Rother, 2009, p. 43). In all, “team leadership is a demanding task …[and] requires getting people to face tough realities

…[and] make the tough changes required to adapt to those realities” (Weber, 2013, p.

198). In contrast to remedial training, therefore, leadership requires acknowledging and acting to improve system-wide practices, such as hiring, performance evaluations, compensation, promotion, and “behavior on the part of key players” (Kotter, 1996, pp.

97, 109-111).

Facing the realities of a system is not simple, for leaders must maintain the balance between short-term goals and the long-term aim. This entails moving “toward a new desired state through an unclear and unpredictable territory by being sensitive to and responding to actual conditions on the ground” (Rother, 2009, p. 9). True leadership, therefore, is “about orchestrating a process …that gets [everyone] with different views and agendas learning from each other” (Weber, 2013, p. 192). This process contrasts with the approach that demands “more discipline” from front-line staff (Rother, 2009, p.

163) or change by everyone except upper management (Weber, 2013, p. 199). In addition, and in contrast to sycophants who merely agree to avoid conflict (Weber, 2013, p. 200), true leadership requires a diversity of perspective that provides the “crucial reality check” needed for success (Cohen, 2005, p. 40).

In general, long-term success depends less on short-term results and more on the

“capability of [employees] to understand a situation and develop [long-term] solutions”

(Rother, 2009, p. 53). However, organizations often encourage adversarial, competitive conflict between individuals and departments (Deming, 1994/2000) rather than cooperation via joint objectives that generate productive conflict for the best of the system (ACASA, 2011; Deming, 1994/2000; Naysmith, 2012). In addition, and even when organizations task employees to achieve shared goals, these specialists often work

within a limited scope and timeframe (Rother, 2009). Therefore, even if one department operates at a loss for the benefit of others (Deming, 1994/2000, p. 53), leaders must not limit action to specialized staff or departments but should involve everyone in training for, developing, and implementing improvement opportunities (Rother, 2009, p. 186). This is especially true in major change efforts, wherein an organization provides training, but it is incorrect, insufficient, or untimely (Kotter, 1996, p. 108), or not executed through skilled instructors (Rother, 2009, p. 189). After all, “if the learner hasn’t learned, the teacher hasn’t taught” (Rother, 2009, p. 191).

In summary, leadership of a system requires evaluating and directing the aim to accomplish the goals of the organization. Leadership stems not from authority, but from humility, teamwork, and facing the realities of system behavior. Of course, leadership must balance short- and long-term success, but it also must develop the capability of its employees to solve problems that ultimately produce success. Lastly, leadership cannot tolerate “the destructive effect of competition” (Deming, 1994/2000, p. 50) but must

“utilize the human intellect of everyone in the organization” (Rother, 2009, p. 14).

However, “modern organizations are a recent invention” (Weber, 2013, p. 205), and “our human instincts and judgment are highly variable, subjective, and even irrational”

(Rother, 2009, p. 14). Therefore, leadership “has less to do with pushing leverage points than it does with strategically, profoundly, madly, letting go and dancing with the system” (Meadows, 2008, p. 165).

Psychology

With respect to human error in organizations, psychology includes “the many theories of organizational culture and behavior such as Maslow’s hierarchy of needs,

Herzberg’s two-factor theory, McGregor’s theory X and theory Y, Ouchi’s theory Z, and

Blanchard’s situational leadership model” (Ono, 2013, p. 38). Calling on these theories,

Deming argued, “With an understanding of system structure and function, psychology helps organizations understand interactions between:

• People and circumstances

• Customer and supplier

• Teacher and pupil

• Manager and his people and any system of management” (Deming, 1994/2000, pp. 107-108).

Because of these interactions, Schultz described organizational psychology as

“The ability to recognize why people behave as they do, and create an environment …in which individual differences and skills are used to optimize the system” (Schultz, 2013, pp. 21-22). Because of the need to create a supportive environment, psychology includes

“Operating norms …that govern the behavior of members of the system …[and] reflect the organizational psychology of the system” (Bennet & Provost, 2015, p. 40). Operating norms can represent stocks of organizational resilience or latent weaknesses that increase the probability of downstream failure.

Psychology of Organizational Behavior

With regard to downstream failures, Reason argues, “Systems [failures] have their primary origins in fallible decisions made by designers and high-level (corporate or plant)

managerial decision makers” (Reason, 1990/2003, p. 203). In terms of the Swiss Cheese theory, these decisions constitute the first holes in system defenses. In support of his claim, Reason adds, “This is not a question of allocating blame, but simply a recognition of the fact that even in the best-run organizations, a significant number of influential decisions will subsequently prove to be mistaken” (Reason, 1990/2003, p. 203). In a QC pharmaceutical laboratory, for example, the overall goal is to provide accurate, compliant results at a competitive price. However, just like the “compatible goals [of] production and safety, …[with] finite resources there are likely to [be] …short-term conflicts of interest” (Reason, 1990/2003, p. 203).

With regard to mistaken decisions and conflicts of interest, Weber states, “...teams and organizations are plagued by behaviors that are as bad for people as they are for business… we suffer from a chronic mismatch between the good we intend to achieve and the ways we work together to achieve it” (2013, p. 1). According to Reason

(1990/2003, p. 203), this is because:

• “Resources directed at improving productivity have relatively certain outcomes; those aimed at enhancing safety [or quality] do not, at least in the short term.

• “The feedback generated by the pursuit of production goals is generally unambiguous, rapid, compelling and (when the news is good) highly reinforcing. That associated with the pursuit of safety [and quality] goals is largely negative and intermittent.

• “Decision makers do not always interpret feedback on either the production or the safety channels accurately. Defensive ‘filters’ may be interposed that both protect them from bad news and encourage extra-punitive reactions.”

In concert with the aforementioned points, Roth (2013, p. 29) adds that organizations and educators focus on the certainty of numerical results because they are

“much easier and faster to work with” than systems thinking. Numerical results have value in understanding and appreciating systems. However, modern society is “vastly more complex and interconnected” than during Frederick Taylor’s era of scientific management (Weber, 2013, p. 13). Because of this, numbers alone do not constitute leadership. For example, organizations cannot set standards “simply to maintain a level of process performance [because] a process will tend to erode no matter what” (Rother, 2009, p. 12). In addition, standards and other “variables …[only] become leverage points when they …[affect] critical parameters. [However, most] numbers are not worth the sweat put into them” (Meadows, 2008, pp. 147-149). To this idea Rother (2009, p. 12) added, “this is not because of poor discipline by workers (as many of us may believe), but due to interaction effects and entropy.”

Psychology of Organizational and Group Leadership

To resolve the discrepancy between expected system aim (i.e., strategic goals) and organizational behavior, Deming argues that management must institute training, eliminate fear, remove barriers to pride of workmanship, and invest in education and self- improvement (Deming, 1982/2000, pp. 23-24). However, the point is not to micromanage or otherwise shift the burden to an intervenor but to restore the “capacity of the system to maintain itself” (Meadows, 2008, pp. 131-133). This is difficult for management to understand, especially since their primary responsibility is to address issues. Notwithstanding, if management only focuses on solutions, “the original problem

[soon] reappears, since nothing has been done to solve it at its root cause” (Meadows,

2008, p. 133). Therefore, organizations must “work in such a way as to restore or enhance the system’s own ability to solve its problems” (Meadows, 2008, p. 135).

Fundamentally, however, organizations experience a conflict when faced with how to intervene. They recognize, “treading water …means falling behind if competitors are improving” (Rother, 2009, p. 13). In addition, upper management understands that

“the native incompetence of any set of line managers could …cause good decisions to have bad effects” (Reason, 1990/2003, p. 205). Due to this factor, intervention is required because leadership cannot wait for their systems to restore themselves naturally.

However, inappropriate intervention creates an overly dependent workforce (Meadows,

2008, p. 133). Therefore, organizational leadership is required.

Leadership, however, is not authoritarian management focused on “short-term gain and self-serving, pocket-stuffing behavior” (Schultz, 2013). Typical dictatorial behaviors are sources of dissatisfaction, encourage employee disengagement, contribute to waste, and undermine motivation and creativity (Evans & Lindsay, 2008/2014, pp. 105,

186). Characteristics that do not define leadership include:

• Does not instill fear or trigger self-defense mechanisms (Deming, 1994/2000, p. 121);

• Does not blame employees for suspected “operator carelessness or incompetence” because these “block the discovery of effective remedies” (Reason, 1990/2003, p. 204);

• Does not “unilaterally define what it means to be …effective, and then impose [these] definitions on their people, wondering later why their [employee] feedback never improves” (Weber, 2013, p. 72);

• Does not focus solely on extrinsic motivation “above that [which is] needed to maintain quality of life,” for this only drives employees to seek Pavlovian “rewards, compensation, or other factors …for the performance of a task” (Ono, 2013, pp. 38-39) and eventually leads to destroyed motivation (Deming, 1994/2000, pp. 108-109);

• Does not “leap ahead with too much faith in …planning and thereby fail to leave room for learning and adaptiveness” (Rother, 2009, p. 153);

• Does not fall into “benchmarking traps, …Pareto paralysis, [or] lengthy theoretical discussion or opinions” that limit timely decision-making (Rother, 2009, pp. 7, 124, 131); and

• Does not rely on agility or flexibility to reach production goals, for these “autonomously bypassed problems are by their nature non-improving” and inhibit continuous improvement (Rother, 2009, pp. 97, 100).

Ultimately, authoritarianism can disrupt the “competitive edge of an organization, [which] doesn’t lie so much in solutions internally …but in [the] ability of the organization to understand conditions and create fitting, smart solutions” (Rother, 2009, p. 6).

In contrast to authoritarianism, leadership creates a unity of purpose and in the direction people wish to pursue (International Organization for Standardization [ISO],

2015). Instead of dominating meetings and conversations, good leaders develop opportunities and show their people appreciation for their talent and ability to contribute to the organization and work with people to explore ideas and develop solutions (Schultz,

2013). True leadership consists of building a climate of trust, effectively resolving conflicts, and even learning to work with defiant team members (Cohen, 2005, pp. 41-

47). Leadership also increases employee satisfaction and delight and encourages active participation in company goals (Barouch & Kleinhans, 2015, p. 203; Evans & Lindsey,

2008/2014, p. 105, 186). In concert with these features, Schultz (2013, p. 23) summarizes that leadership consists of several characteristics as follows:

 Leadership skills are not an inherently intuitive [but] are based on competencies that can be taught and learned.

68  Like any other ability, proficiency in leadership grows through coaching by teachers and mentors and experiencing a variety of work settings.

 Behaviors required for leadership must be adaptive, flexible, and proficient enough to meet the needs of followers as they occur.

 Leadership is needed and can be found formally (officially selected) or informally (follower appointed) at every level of an organization.

 With enough ambition, mental discipline, and emotional maturity, anyone has the potential to lead.

 The principles and competencies for leadership that are described in Deming's system of profound knowledge can be transforming and increase any organization’s competitiveness and become a place where people are proud to work.

When true leadership is present, employees tend to “reciprocate beneficial treatment they receive with positive work-related behavior” (Trybou, De Caluwé,

Verleye, Gemmel, & Annemans, 2015, p. 3), which “improves the bottom line by taking better advantage of employee expertise and potential” (Roth, 2013, p. 27). This positive behavior is driven by intrinsic motivation, defined as “the will to perform that is driven by internally generated factors such as the desire to succeed, pleasure of doing a task well, or altruistic efforts to assist others” (Ono, 2013, p. 38). Leaders who understand intrinsic motivation acknowledge the competitive advantage and positive impact engaged employees have on customer perceptions (Kumar & Pansari, 2014; Panaggio, 2014;

Trybou et al., 2015). Wise organizations are mindful of how competitive pressures may encourage “blind allegiance and loyalty [which lead to] relaxed moral reasoning, [but] instead emphasize the importance of social responsibility and caring for all stakeholders”

(Chen, Chen, & Sheldon, 2016, p. 1093) and “not just a privileged few” (Schultz, 2013, p. 21).

69 Psychology of Error Detection and Perception

Notwithstanding the potential gains of effective organizational leadership, and

“no matter how well we come to understand [human error], …errors are …inevitable”

(Reason, 1990/2003, p. 148). In spite of their inevitability, humans rely on three modes of error detection:

1. Individual self-monitoring, which uses “feedback control via high-level attentional monitoring …to ensure that actions conform to current intentions, particularly when they demand a deviation from routine practice” (Reason, 1990/2003, pp. 156-157),

2. Cues from the external environment, which “block our onward progress …[via] forcing functions” (Reason, 1990/2003, p. 161) such as the poka-yoke theory (Puvanasvaran et al., 2014), and

3. Discovery by another person who provides a fresh perspective (Reason, 1990/2003, p. 165).

The drawback with the first mode is that attentional monitoring does not guarantee detection because it relies on “the availability of cues signaling the departure of action from current intention” (Reason, 1990/2003, pp. 156-157). This is common for read-and-acknowledge training, for example, as staff may not recall all changes to a routine practice. In contrast, cues from the external environment may be superior to internal monitoring. However, they are only useful if used to successfully prevent error commission, for “backtracking from an [external cue] creates additional opportunities for deviation and can lead to total confusion on the part of the fault finder” (Reason,

1990/2003, p. 161). In support of this factor, Ernst Mach argues, “knowledge and error flow from the same mental sources, only success can tell the one from the other” (as cited in Reason, 1990, pp. 1). Lastly, aid from an external agent can also be useful, but only

70 when their “diagnostic hypothesis” is an accurate portrayal of the “true state of affairs”

(Reason, 1990/2003, p. 165). In other words, Deming’s “outside view” (1994/2000, pp.

92) only works when the outsider is not biased.

Related to the topic of outsider bias, accurate error detection is difficult because of the limitations of human perception. Specifically,

Human perception crafts well-designed, “mental models …[with] usually …a

strong congruence with the world …However, and conversely, our models fall far

short of representing the world fully …We often draw illogical conclusions from

accurate assumptions, or logical conclusions from inaccurate assumptions.

(Meadows, 2008, pp. 86-87)

Because accurate perception is difficult, correctly identifying the causes of error is doubly so. In particular, individuals and organizations fascinated by outcomes suffer from shortsighted misperceptions and, therefore, lack the ability to identify the true causes of failure (Meadows, 2008, pp. 90-91).

There are two primary causes of inaccurate perception. First, mental limitations constrain understanding to “only a few variables at one time” (Meadows, 2008, pp. 105-

106; Reason, 1990/2003, pp. 43, 96). Because of these constraints of information and mental capacities, individuals employ bounded rationality, or conclusions based on incomplete mental models they use to make seemingly reasonable but shortsighted decisions (Simon, 1955; Meadows, 2008; Reason, 1990/2003). Coupled with bounded rationality, individuals often establish mental boundaries around their models in order to limit the scope of their analysis. These boundaries justify the resultant decisions in the

71 absence of factual data and support the decisions against opposing perspectives, but also create bias against other perspectives. Two examples of bias are as follows:

1. Confirmation bias (Weber, 2013, p. 130) or belief polarization, which Cordelia Fine (2006) summarized: “Evidence that fits with our beliefs is quickly waved through the mental border control… Counterevidence, on the other hand, must submit to close interrogation… This phenomenon, called belief polarization, may help to explain why attempting to disillusion people of their perverse misconceptions is often futile” (Weber, 2013, pp. 152-153).

2. Hindsight bias, which causes retrospective judges to overestimate the causes of past events. In discussing this bias, Reason (1900/2003) stated, “Outcome knowledge dominates our perception of the past, yet we remain largely unaware of its influence… Each participant’s view of the future would have been bounded by local concerns. Instead of one grand convergent narrative, there would have been a multitude of individual stories running on in parallel towards the expected attainment of various distinct and personal goals” (p. 215).

Bounded rationality and the resultant biases are common to investigations in pharmaceutical QC laboratories. Specifically, due to limited information and mental constraints, those who investigate failures ultimately operate from a position of blind ignorance (see Reason, 1990/2003, p. 163). This ignorance contributes to the investigation’s framework and influences the investigation’s process. For example, investigators are typically not present during the failure. Therefore, they gather objective data, if available, and use interviews to supplement any missing or incomplete evidence.

However, this approach has flaws because of the assumption that the investigation can supply all relevant data, even though some facts are not measurable (e.g., effect of company culture) and other aspects are unknowable (see Deming’s Theory of Knowledge in Deming, 1994/2000, pp. 101-107). Of course, this is not limited to pharmaceutical QC laboratories, for example, as the FDA permits incomplete information in process

72 validation (i.e., “FDA does not generally expect manufacturers to develop and test the process until it fails” (FDA, 2011a, p. 9).

The second cause of inaccurate perception relates to Ross’s (1977) description of a fundamental attribution error. Even if presented with complete information, Ross argues that observant psychologists “readily infer” and make “hasty conclusions” about human influences in failures (1977, p. 184). This approach limits understanding due to

“the narrow sliver of things on which they each focus” (Weber, 2013, p. 125). As typically seen in pharmaceutical QC laboratories, failure investigations too quickly focus on proximal human involvement. In response, the investigation concludes with retraining as the necessary corrective action. This approach admits the failure is a situational surprise, or “localized events requiring the solution of specific problems” (Lanir, 1986, as cited in Reason, 1990/2003, p. 213). Weber defines this approach as managing a routine problem, “not because it happens regularly, but because we have a routine for dealing with it” (2013, p. 182).

In contrast to inferences and hasty conclusions, Reason cites the phrase fundamental surprise (Lanir, 1986, as cited in Reason, 1990/2003, p. 213), which Reason expounds as an event, which “reveals a profound discrepancy between one’s perception of the world and the reality [for which] a major reprisal is demanded” (Reason,

1990/2003, p. 213). In pharmaceutical QC laboratories, for example, investigations requiring systems thinking concern themselves with fundamental surprises. Instead of localized, isolated failures due to human involvement, the investigation seeks to evaluate latent, distal influences such as human resource policies, training quality and frequency,

73 and organizational culture and leadership. Instead of applying routine solutions, this approach first admits the failure may be an adaptive challenge, defined as “a problem

[with] no proven routines for dealing with the issue” (Weber, 2013, p. 182).

In summary, understanding the psychology of error requires an accurate perception, but bounded rationality, biases, and attribution errors inhibit this accuracy.

These factors contribute to “a fatally flawed set of assumptions,” which embolden people with a “moral imperative to convince others” to embrace their point of view (Weber,

2013, pp. 46, 131). Because of this, the “a natural human tendency” is to consider events as routine and situational even if they may be fundamental surprises which require adaptive responses (Reason, 1990/2003, p. 213; Weber, 2013, p. 183). Furthermore, while an organization may assume it is stable when faced with only routine issues,

“organizations …often face a number of adaptive hurdles [when] dealing with growth, change, competition, limited resources, and other welcome realities” (Weber, 2013, p.

183).

While training in psychology can help minimize flawed perceptions, nevertheless some tendencies will remain. This tendency, termed skilled incompetence, occurs “when our mindless reactions work against our intentions” (Argyris, 1990, as cited in Weber,

2013, p. 35). For example, misdiagnoses in accident investigations “tend to persist regardless of an accumulation of contradictory evidence” (Reason, 1990/2003, p. 165).

In support of this view, Weber argues, “Our brain …doesn’t prefer an informed view of reality; it prefers instead a biased, self-serving view that reinforces our current perspective, reassuring us that our way of seeing the world is right and true” (Weber,

74 2013, p. 63). Other limitations may exist concurrent with psychological ones, such as cultural barriers in a diverse organization or a misunderstanding of the distinction between common and special causes of variation (Deming, 1982/2000, pp. 309-370;

1994/2000, pp. 98-99, 207-226). Nevertheless, an “outside view” is necessary to see all issues (Deming, 1994/2000, p. 92), because, as Einstein stated, “problems cannot be solved at the same level of awareness that created them” (Rother, 2009, p. 1). Therefore, successful organizations must seek outside knowledge in order to anticipate, identify, and adapt to changing system conditions (Schultz, 2013).

Psychology of Error Commission

Inaccuracies are commonplace during the perception of error. Another common occurrence happens during the classification of human error. As argued in Chapter 1, robust quality systems do not rely on remedial training because they recognize humans cannot maintain complete attention for extended periods of time (Reason, 1990/2003, p.

180; Juran & Godfrey, 1999, pp. 3.43-3.44). Instead, robust systems accurately perceive errors and improve processes to address the root causes of human error.

As stated previously, the blame-retrain approach views ‘human error’ as related to

“forgetfulness, inattention, poor motivation, carelessness, negligence, and recklessness”

(Reason, 2000, p. 768). In contrast to this view, Reason argues that “cognitive performance and systematic errors are two sides of the same coin. …The resource limitations of the conscious ‘workspace’, while essential for focusing …upon particular aspects of the world, contribute to informational overload and data loss” (Reason,

1990/2003, p. 2).

75 Human error, therefore, is due to the limitations of human thinking and represents the “penalty …for our remarkable ability to model the regularities of the world and

…simplify complex information-handling tasks” (Reason, 1990/2003, p. 17).

However, Reason observes, “Errors take a limited number of forms” and include

mistakes, which are planning and problem-solving failures and slips and lapses, which

deal with observation and execution failures (Reason, 1990/2003, pp. 3, 8, 63). These

forms, of course, are limited to unintentional errors, not actions caused by employee

disengagement, negligence, defiance, or sabotage (Reason, 1990/20003, p. 195).

Because of the limited forms of unintentional error, therefore, effective theories must

understand “both the conditions under which an error will occur, and the particular form

that it will take” (Reason, 1990/2003, p. 4).

Instead of classifying of human error generally as the cause of problems, Reason

(1990/2003, pp. 53, 56) argues that unintentional human errors are due to:

 Knowledge-based errors;

 Rule-based errors; and

 Skill-based errors.

These three modes differ based on the “degree of preparedness that exists prior to

change” and whether the action precedes or follows detection of the error (Reason,

1990/2003, pp. 61, 63).

In general, persons working on novel or less-than familiar tasks operate at the

knowledge-based level (Reason, 1990/2003, p. 58). Lacking knowledge and

understanding, these individuals plan and execute tasks with a “computationally powerful

76 investment of conscious attention” (Reason, 1990/2003, p. 6). In contrast to merely paying attention, control of knowledge-based tasks is “primarily of the feedback kind

…using slow, sequential, laborious and resource-limited conscious processing …[that is] difficult to sustain for more than brief periods” (Reason, 1990/2003, pp. 50-51, 57).

While error prevention is possible, not all efforts succeed after “applicable problem- solving routines …contingency plans, or preprogrammed solutions…[have] been exhausted by the demands of a novel situation” (Reason, 1990/2003, pp. 57-58, 61, 166).

In addition, as discussed previously, humans cannot maintain complete attention for extended periods of time (Reason, 1990/2003, p. 180; Juran & Godfrey, 1999, pp. 3.43-

3.44). Therefore, errors are inevitable at the knowledge-based level.

Reason described knowledge-based errors as those occurring during task planning and stemming from many of the same issues as error perception. They include:

 Bounded rationality,

 Limited capacity to solve complex issues, and

 Incomplete knowledge due to confirmation bias, overconfidence, and oversimplification (1990/2003, pp. 38, 43, 55-74, 96).

A discussion of the first two points can be found page 69 of this thesis. The third point, incomplete knowledge, by itself is a misnomer. It suggests “forgetfulness [and] inattention” (Reason, 2000) as reasonable causes for human error, which supports the use of remedial training. However, this should never occur in pharmaceutical QC laboratories, for FDA regulations require that organizations understand the process and train their personnel effectively. Therefore, when system understanding exists and training is effective, personnel should not encounter knowledge-based errors.

77 In contrast to the intense focus required by knowledge-based tasks, rule-based tasks consist of resolving “familiar problems in which solutions are governed by stored rules…of the type if (state) then (diagnosis) or …then (remedial action)” (Reason,

1990/2003, p. 43). Personnel can anticipate negative events due to prior experience, and well-designed processes and contingencies guide the proper execution of the task

(Reason, 1990/2003, p. 61). Typically, humans evaluate problems at the rule-based level, and only “resort to the far more effortful [knowledge-based] level when available stored rules are inadequate” (Reason, 1990/2003, p. 65). Experts, therefore, are those who can retrain and recall more rules as well as understand them at a more abstract level (Reason,

1990/2003, p. 58).

In pharmaceutical QC laboratories, for example, rule-based tasks are common.

Specifically, written procedures describe the expected task, and contingency processes define how to troubleshoot or report unexpected outcomes. In addition, periodic training refreshes understanding and reinforces compliance. Well-designed procedures and robust training decrease failures at the rule-based level because “the cognitive system is extremely good at modeling and internalizing the useful regularities of the past and then reapplying them whenever their ‘calling conditions’ are supplied by intentional activities or the environment” (Reason, 1990/2003, p. 51).

Compared to knowledge-based tasks, errors at the rule-based level are less common. Nevertheless, errors at the rule-based level occur due to latent system issues such as poor training, awkward procedures, or complicated policies. According to

78 Reason (1990/2003), these gaps manifest as failures when personnel do the following

(pp. 43, 57, 74-86, & 96):

 Misread environmental cues (most common),

 Recall the incorrect procedure,

 Apply good rules to the incorrect scenario, or

 Apply incorrect rules because of their lack of experience or understanding about the situation.

Whereas knowledge-based errors occur due to the absence of experience, rule- based errors are “anticipated, either as the result of past encounters or because they are considered as likely possibilities by instructors or designers” (Reason, 1990/2003, p. 61).

In contrast to knowledge-based errors, however, rule-based thinking allows its adherents to advance using known rules in “strong-but-wrong routines” (Reason, 1990/2003, p. 57).

More specifically, even when the correct rules are available, rule-based errors can occur due to the selection of a good rule but at the wrong time or during the wrong circumstances. In failure investigations, for example, the rule-based thinking causes investigators “to search for symptoms and …create possible event scenarios at the same time” (Reason, 1990/2003, p. 94). This skews the investigative process as the investigator designs scenarios to fit the symptoms, “not because people lack the necessary creativity to generate scenarios, …but because they fail to apply strictly logical thinking to both the initial facts and to the products of scenario generation” (Reason,

1990/2003, pp. 94-95).

When rule-based failures occur, therefore, failure investigations must employ system thinking to evaluate support processes such as training, procedures, and policies.

79 This is necessary to determine if these processes enable accurate rule-based thinking, such as proper cue reading, procedure recall, and rule application. However, if the investigation suspects a special cause of variation (see Deming, 1982/2000, pp. 309-370), the investigation should evaluate its source and apply corrective actions to address the cause. For example, retraining may be appropriate after an evaluation of and improvements to the training program. After all, the initial and primary focus of the investigation should be to evaluate the adequacy of the system and its processes, the efficiency of established rules, and the quality of support structures, for rules and other

“contingency routines for handling” errors cannot always predict which and when errors will occur (Reason, 1990/2003, pp. 8, 61, 84, 96).

Similar to tasks performed at the rule-based level, skill-based tasks rely on “stored patterns of preprogrammed instructions,” which focus on “routine and non-problematic activities in familiar situations” (Reason, 1990/2003, pp. 43, 56). In contrast to rule-base actions, however, skill-based actions occur without conscious control and with

“attentional resources…not…focused on the routine task at hand” (Reason, 1990/2003, p.

56). Because of this difference, skill-based actions require the least effort and are not generally prone to tiring. Because of potential improvements to employee efficiency, therefore, organizations should seek to train their staff to use less knowledge-based thinking, some rule-based thinking, and more skill-based thinking.

In spite of the benefits of skill-based thinking, it is not without errors. Typically, these errors, which occur during routine monitoring and execution due to attentional checking, include:

80  Inattention during a critical point in the task, or

 Over-attention and subsequent misinterpretation of the status of the task, especially during an otherwise “automatized action sequence” (1990/2003, pp. 64, 86-96).

Skill-based errors, therefore, are based on the proper coordination of time and attention (Reason, 1990/2003, p. 43). Both types of skill-based errors “generally involve a necessary departure from some well-established routine” (Reason, 1990/2003, p. 60).

Instead of adhering to a well-known routine, for example, inattention causes the person to

“fail to bring the conscious workplace ‘into the loop’ at…critical points…[which] causes actions to run” beyond or short of the desired endpoint (Reason, 1990/2003, p. 95). This can occur because of a “failure to recall earlier situational changes” or when “knowledge relating to …changes was not accessed at the appropriate time” (Reason, 1990/2003, pp.

60-61). In comparison, over-attention involves inquiry based on higher-level processes

(e.g., rules or knowledge), and the task status “is assessed as either being further along or not as far as it actually is” (Reason, 1990/2003, p. 95). Both types of errors can occur due to distractions or other drains on cognitive resources. Because of this, skill-based mistakes also follow the “strong-but-wrong” approach but often result from thought processes near the original intent (Reason, 1990/2003, p. 57). This may include, for example, brushing one’s teeth with diaper cream instead of toothpaste, or adding pasta sauce to spaghetti before draining the water.

In summary, human error is inseparably connected to the three modes of error commission, namely knowledge-, rule-, and skill-based errors. Skill-based errors occur prior to conscious awareness of the problem and “are …mainly associated with

81 monitoring failures” (Reason, 1990/2003, pp. 56, 63). In contrast, knowledge- and rule- based errors occur after the detection of a problem and result from improper attempts to resolve the issue (Reason, 1990/2003, pp. 56, 63). Unlike the reliance on feedback in knowledge-based tasks, however, rule- and skill-based tasks operate “via feed-forward control emanating from stored knowledge structures…, [and] errors…occur while behavior is under the control of largely automatic units within the knowledge base”

(Reason, 1990/2003, p. 57). Because of this feedforward control, subject matter experts move toward rule- and then skill-based actions, for they are less error prone than knowledge-based actions even though “all three levels can coexist at any one time”

(Reason, 1990/2003, pp. 43, 58-59). Nevertheless, “the more skilled an individual is in carrying out a particular task, the more likely it is that …errors will take ‘strong-but- wrong’ forms” (Reason, 1990/2003, p. 58). Therefore, remedial training is ineffective because it fails to understand and effectively remediate the root cause of the error: human psychology.

So Why Use Humans?

The question, then, becomes “Why use humans if they are so error prone?”

Reason argues, “Errors are … the acceptance price human beings have to pay for their remarkable ability to cope with very difficult informational tasks quickly and, more often than not, effectively” (Reason, 1990/2003, p. 148). Organizations, therefore, rely on the

“remarkable [human] ability to simplify complex informational tasks” and “handle ‘non- design’ emergencies” that electronic devices cannot (Reason, 1990/2003, pp. 2, 182).

Unfortunately, organizations frequently take this ability for granted even when the

82 organization shares the blame for the error (see Swiss Cheese theory on page 54). This is especially true because stressed humans favor and perform better with rule- and skill-

based thinking rather than the methodical but more difficult knowledge-based reasoning

expected by confirmation and hindsight biases (Reason, 1990/2003, pp. 176, 182).

If humans are a necessary system component and “if it is impossible to guarantee

the elimination of error, then we must discover more effective ways of mitigating [the]

consequences [of errors] in unforgiving situations” (Reason, 1990/2003, p. 148). In

concert with discovering better ways to mitigate errors, organizations must improve their

training because “antiquated methods of training …[produce] people on the job that [sic]

do not know what the job is and are afraid to ask” (Deming, 1982/2000, p. 26). To

accomplish this, Rother (2009, p. 237) argues, “The field of psychology is clear on this:

we learn habits, automatic reactions, by repeatedly practicing behaviors.” However, the

solution is not merely to refine or increase the volume of training because, by itself,

“information is not knowledge” (Deming, 1994//2000, p. 106). Instead, what

organizations need is a change in culture to reject remedial training in favor of an

improved approach to identifying and evaluating the root cause of human errors.

This culture change, however, “is not achieved through books, intellect,

classroom training, discussions, or anything similar” (Rother, 2009, p. 237). In

implementing a change strategy, for example,

Executives who were most committed to the strategy, resisted the kinds of

sacrifices needed to carry it out. They treated the implementation process like a

routine checklist, and failed to address its more adaptive aspects – their corporate

83 culture, their old habits, and their instinctively defensive reactions to change.

(Weber, 2013, p. 185)

This resistance, of course, grows from the Western emphasis on short-term analysis instead of appreciating the system via synthesis (see page 28 of this thesis). However, this resistance effectively undermines any chance for improvement because “nothing undermines the communication of a change vision more than behavior on the part of key players that seems inconsistent with the vision” (Kotter, 1996, p. 97). What organizations need, therefore, is a tactic to combat resistance, “pull the ideas and perspectives of others into the decision-making process” (Weber, 2013, pp. 68-69), and produce the knowledge, desire, and ability needed for change (Hiatt, 2006).

84 CHAPTER 3

METHODOLOGY

The idea is to not stigmatize failures, but to learn from them. Mike Rother

Design of the Study

The purpose of this thesis was to identify a tactic to improve the framework by which organizations conduct failure investigations. Ultimately, the goal was to apply

Deming’s Appreciation for a System and Psychology to current practices for identifying root causes and selecting corrective actions. To accomplish this end, the research methodology used was a search of the available literature. This search extended to applicable areas of interest and included Boolean variations of the following terms and phrases:

 Root, cause, decision, deliberation, analysis, tool, technique, investigation, human, error;

 Pharmaceutical, quality system (QS), quality, FDA, regulatory, improvement, continuous, continuous improvement;

 Deming, system, system of profound knowledge, appreciation, psychology;

 Leadership, motivation, conversation, contribute, open, unbiased, discourse, benchmark, best practice, and change.

The literature search began with a review of scholarly, peer-reviewed articles published after 2011 and then extended to relevant books and articles published after

2006 in order to increase the available results and determine historical context. In addition, this thesis reviewed selected references of these articles and books. Search

85 results were limited to the English language using Google, Google Scholar, and the via the CSUDH Library portal.

This thesis limited the search to tactics that fit within the existing investigative framework required by regulatory agencies. Specifically, the purpose of this thesis was not to develop a new technique for root cause analysis or suggest changes to FDA requirements. Instead, this thesis favored tactics, which work within FDA requirements, resist remedial training, encourage continuous improvement, and align with Deming’s

Appreciation for a System and Psychology.

Because of the focus on a literature search, this thesis did not query human subjects or evaluate quantitative data. The search focused solely on the published literature, which allowed for the analysis of tactics to prevent blame-retrain in regulatory- mandated investigations. Because of the theoretical approach used, however, this thesis lacked empirical evidence. Future research, therefore, should evaluate organization- specific circumstances before implementing the suggestions contained herein.

Data Analysis Procedure

This thesis evaluated the results of the literature search against the discussion points described in the literature review. In addition, this thesis determined if organizations could employ the tactic within cGMP regulations and alongside current

RCA practices. Specifically, this thesis compared each tactic against the criteria that follow, according to the appreciation of the System, Psychology, and Current Good

Manufacturing Practices. This thesis then scored each tactic with a numerical score of 0

86 (no mention of the criteria), 1 (indirect mention or application), or 2 (direct mention or application). This thesis then tabulated the scoring.

Appreciation for a System

 Understands system structure, including stocks, flows, and related processes.

 Respects system boundaries but is willing and flexible to expand across multiple departments, academic disciplines, and periods of time.

 Helps define a well-balanced system aim that:

o Describes long-term expectations,

o Encourages honest disclosure,

o Develops organizational capability, and

o Encourages process-based leadership instead of managing by objectives.

 Seeks continuous improvement via resilience, self-organization, and appropriately designed hierarchies.

 Honestly acknowledges latent errors in the system.

Psychology

 Recognizes the mental limitations of numerical results.

 Focuses on the positive psychological aspects of leadership.

 Admits limitations and biases in detecting and perceiving errors.

 Permits for corrective actions to address the underlying causes of human error (e.g., knowledge-, rules-, and skill-based errors).

 Acknowledges the limitations of humans and aims to reduce these limitations through better training and an improved culture.

Current Good Manufacturing Practices

 Can use within cGMP regulations alongside current RCA practices.

87 CHAPTER 4

RESULTS AND DISCUSSION

The literature search was not able to locate an individual tactic that satisfied the specific application of Deming’s System of Profound Knowledge to resolve remedial training in pharmaceutical QC laboratories regulated by the FDA. This thesis concluded the absence of available literature on this topic is due to the strict regulatory environment, which tends to discourage organizations from publishing negative results that may increase regulatory scrutiny. In addition, because regulatory bodies require RCA investigations after failures, organizations tend to use RCA as “a governance tool and a way to re-establish organizational legitimacy in the aftermath of incidents” rather than a process for learning and improvement (Nicolini, Waring, & Mengis, 2011). Even if made available publicly, therefore, this thesis assumed the investigations would support the literature review discussed in Chapter 2.

In light of these initial results, this thesis expanded the search to include general approaches and other applications not specific to the FDA or pharmaceutical QC laboratories. A textual summary of the literature search results appears in Table 2. This thesis applied the numerical scoring criteria described in Chapter 3 to each search result.

The resultant scoring rubric appears in Table 3, with column headers for each search result from Table 2. Following the tables, Figure 1 provides a graphical summary of the total score of each search results, followed by brief summaries of important elements of each search result.

88 Table 2

Summary of Literature Search Results

Topic and Reference Summary Journal Sources a) Decision-making in Juries suffer from common psychological errors, but perform adequately Juries (Bornstein & when: Greene, 2011)  Provided pretrial instructions  Allowed to take notes and ask questions  Provided midtrial summaries and reminders  Debriefed after difficult cases b) Decision-making in Two approaches to decision-making include: Labor Cases (Ramiah  Informal, story-building models are used to summarize chronological & Banks, 2015) events and then select the most plausible story  Technical, analytical, and reductionist approaches are favored when detailed justifications are required c) Decision-making and Decision-making suffers after group conflicts: Intergroup Conflicts  Conflicts produce emotional states that alter subsequent decisions (Martínez-Tur,  Individual and group decisions suffer due to decreased rationality and Peñarroja, Serrano, cooperation after conflicts Hidalgo, Moliner,  Group deliberation partially corrects emotional states and restores Salvador, ... & Molina, rationality and cooperation 2014) d) Decision-making in Recommendations to offset bias and other shortcomings in intelligence Intelligence Analysis analysis: (Puvathingal &  Design externally valid studies Hantula, 2012)  Disseminate research  Collaborate  Centralize research  Employ research from other areas of psychology e) Decision-making and Organizations favor risk assessment (harm aversion) instead of risk Risk Management management (balancing risk positives and negatives). Outlined three (Carson, 2012) topics for reviewers/investigators:  What is reviewed  How the review is undertaken  Assuring individual and organizational learning

89 Table 2 (Continued)

Topic and Reference Summary Journal Sources f) Improvements to RCA RCA is used as a “bureaucratic mode of legitimation and governance.” for Healthcare Offered different approach: (Nicolini, Waring, &  Actively encourage disagreements; shift RCA investigations away Mengis, 2011) from centralized groups  Implement more structured but simplified RCA; encourage narrative thinking; encourage change, not analysis  Analyze recurrent system trends via cross-department groups  Encourage investigators to facilitate organizational development and learning g) Improvements to RCA There are 8 problems with current RCA approach and 5 steps to resolve it: for Healthcare  Establish cross-functional, investigatory bodies (Peerally, Carr, Waring,  Involve the perspectives of affected families during the investigation & Dixon-Woods, 2016)  Increase understanding of blame culture, and recognize all responsibilities, including need for organizational learning  Increase aggregate analysis at departmental, organizational, and national levels  Proactively detect hazards and assess risks via other tools (e.g., FMEA) h) Improvements to RCA Fishbone diagram expanded to include barrier analysis via new Lovebug – Barrier Analysis diagram (Card, 2013) i) Psychological distance Psychological distance from a problem is necessary to perceive latent in the Perception of causal factors over front-line consequences Cause (Rim, Hansen, & Trope, 2013) Book Sources j) Thinking in Systems Twelve leverage points to intervene in a system (Meadows, 2008, pp. 145-165) k) Lean Management, Copying external practices or quality systems will always fail because it Toyota Kata (Rother, focuses on results (e.g., cannot implement Lean via benchmarking). 2009) Describes Toyota’s behavioral method (kata) for discussing and implementing change l) Conversational All conversations fail under pressure in spite of good intentions, structure, Capacity (Weber, 2013) and technical expertise. Instead, the ability to seek candor and curiosity will produce conversations with sufficient capacity to handle any topic. Note. Developed by the author of this thesis.

90 Table 3

Evaluation of Literature Search Results

Evaluated Score Book Literature Sources Sources

Decision-making RCA PsyD

Criterion a b c d e f g h i j k l

Appreciation for the System

System structure 0 0 0 0 1 0 1 1 0 2 1 0

System boundaries 0 0 0 0 1 1 1 0 1 2 2 1

System aim 1 1 1 1 1 1 1 1 1 2 2 1 Describes long-term – – – – – – – – – + ++ – expectations Encourages honest ++ ++ ++ + – ++ + + + + + ++ disclosure Develops organizational – – + + ++ + + + – + ++ ++ capability Process-based + – – + ++ + + + – + ++ + leadership not MBO

Continuous improvement 0 0 1 1 1 2 2 0 1 2 2 1 Acknowledges latent 1 0 1 1 2 0 1 1 1 2 1 2 errors

Psychology Recognizes limitations of 0 0 0 0 1 0 0 0 0 1 2 0 numerical results

Focuses on leadership 0 0 0 1 1 0 1 0 1 2 2 2 Ack. limits/biases in 2 0 1 2 2 0 0 0 2 1 1 2 detect/perceive error Corrective actions for 0 0 1 0 1 1 0 0 0 1 2 0 human error causes Ack. human limits; seeks 2 0 2 2 2 1 1 0 1 2 2 2 to improve

GMP Can use under cGMP and 2 2 2 0 2 2 0 2 2 1 1 2 current RCA

Total (maximum 22) 8 3 9 8 15 8 8 3 10 18 18 13 Note. Developed by the author of this thesis. RCA = Root Cause Analysis. Psy D = Psychological Distance. Dash refers to no mention of this element. A single plus sign refers to some mention of the element. A double plus sign refers to explicit and detailed explanation of this element. Developed by the author of this thesis.

91

Appreciation for the system (sum)

a Psychology (sum) GMP b

c

d

e

Source

f

g

h Literature

i

j

k

l

0 3 6 9 12 15 18 21 Total Score

Figure 1: Graphical Summary of the Evaluation of the Literature Search Results. Note. Developed by the author of this thesis.

Discussion of Results

As shown in Tables 2 and 3, results were grouped according to the reference type

(journal article or book) and topic or title of the reference. Regarding the peer-reviewed journal sources, the topic of the first group (sources a through e) was Decision-making, the topic of the second (sources f through h) was Root Cause Analysis, and the final

(source i) was a single article on the topic of Psychological Distance. The three book sources (j through l) were grouped based on the type of source (book) but separated from

92 all others using the subsection title followed by the title of the book (e.g., Book Source –

Thinking in Systems).

Decision-Making

As shown in Table 2, this thesis identified five sources (a through e), related to decision-making. In failure investigations, effective decision-making could correct the emphasis on remedial training, especially through system understanding.

The first of the decision-making sources was Bornstein and Greene (2011), who discussed jury service and related a jury decision-making process to improved understanding of psychology. Jury service could relate well to human error investigations in pharmaceutical QC laboratories because investigations, like jury members, might face complex and contradictory information, which could affect informed conclusions. For example, Bornstein and Greene (2011) compared juries to an organization whose untrained members make decisions in spite of hindsight bias and other psychological errors. To reduce biased outcomes, the authors summarized suggestions for trial enhancements that included the following (Bornstein and Green,

2011, p. 66):

 Provide retrial instructions for jurors.

 Encourage note-taking and juror questions.

 Summarize proceedings and instructions during the trial.

 Debrief jurors after difficult cases.

93 Applied to failure investigations, pharmaceutical QC laboratories could provide these enhancements for investigators and staff under investigation. When faced with reluctant or apathetic staff, however, this approach might fail to accomplish its goals.

The second of the decision-making sources related to resolving labor disputes.

Similar to Reason’s knowledge- and rule-based thinking, Ramiah and Banks (2015)

described decision formulation in real-world scenarios, including time-pressured choices

based on intuition or pattern recognition in novel or familiar situations. In contrast to

informal, story-building models to summarize chronological events and then to select the

most plausible story, Ramiah and Banks (2015) argued that technical, analytical, and

reductionist approaches helped establish detailed reasoning and justifications. The

authors claimed that experts used appropriate justifications and reason-based arguments,

and consequently were more accurate than less experienced officials. Much like the

Ackoff’s description of analytical thinking, Ramiah and Banks (2015) argued for the

analytical approach because well-founded arguments apply fact-checking to explain data

inconsistencies and support decision-making. This approach was similar to the current

decision-making process in pharmaceutical QC laboratories, albeit without the benefit of

root cause analysis.

The third decision-making source examined decision-making in spite of

intergroup conflicts. The authors studied the frequency and effect of intergroup conflicts

due to human’s ability to distrust, discriminate, and segregate (Martínez-Tur, Peñarroja,

Serrano, Hidalgo, Moliner, Salvador, . . . & Molina, 2014). This study showed that when

preceded by emotionally charged conflicts, rational thinking in groups decreased

94 (Martínez-Tur et al., 2014). In spite of the innate connection between conflict and negative emotion, groups could restore rationality if its members first corrected emotional states via group discussion and deliberation (Martínez-Tur et al., 2014). However, deliberative thinking proved to be effortful and slow (Martínez-Tur et al., 2014), much like Reason’s (2003) description of knowledge-based thinking.

The fourth source evaluated decision-making in intelligence analysis.

Puvathingal and Hantula (2012) examined the consequences of “information overload and severe time constraints” (p. 199) on the quality of rational decisions in intelligence analysis. These authors described the lack of empirical evidence to mitigate biases in intelligence analysis and argued for a remedial approach as follows (Puvathingal &

Hantula, 2012):

 Design externally valid studies.

 Disseminate research.

 Collaborate.

 Centralize research.

 Employ research from other areas of psychology.

As observed during the literature search for this thesis, the lack of empirical evidence was similar to the lack of literature on remedial training. However, as discussed in the beginning of Chapter 4, a strict regulatory environment might discourage honest disclosure of negative results, especially if organizations were wary of increased scrutiny.

The last of the decision-making sources related to risk management in police work. Carson (2012) summarized the risk decisions of professionals, and argued for

95 actively studying and managing the positive and negative aspects of risk, rather than defaulting to harm aversion. In addition to the standard incident analysis, Carson (2012)

proposed an extensive process organized under the following three steps:

 What to review

o Risk management plans and resources available to control implementation

o Quality of the risk management program and managerial support for continuous improvement

 How to review

o Explain and justify analysis of relevant incident data

o Explain time and resource bias versus when risk was taken (e.g., hindsight bias, fundamental attribution error)

o Describe perceived system structure, root cause analysis, and separation of cause and blame

 Consider continuous improvement via learning

o Assess management systems that support continuous improvement

o Identify opportunities for learning by all relevant groups, including management

As shown in Table 3, Carson’s review guidelines scored the highest of the

decision-making sources. Nevertheless, possible improvement to the process included

the consideration of long-term expectations and specific adaptations to pharmaceutical

QC laboratories for those elements with a score of unity.

96 Improvements to Root Cause Analysis

As stated in Chapter 1, a detailed evaluation of RCA methods was beyond the scope of this thesis. Instead, this thesis identified three sources (f through h) related to improvements to the root cause analysis process (as shown in Table 2).

The first of the RCA sources examined investigations in healthcare organizations.

Nicolini, Waring, and Mengis (2011, p. 217) criticized the current approach to RCA, arguing that it was used as a “bureaucratic mode of legitimation and governance” rather than a tool for learning and continuous improvement. To remedy this, the authors proposed the following modified approach (p. 224):

 Actively encourage disagreements in discussion;

 Shift RCA investigations away from centralized groups toward front-line staff;

 Implement more structured but simplified RCA;

 Encourage narrative thinking;

 Encourage change, not analysis;

 Analyze recurrent system trends via cross-department groups;

 Encourage investigators to facilitate organizational development and learning;

This approach may provide an improvement to organizations constrained by departmental silos. However, shifting investigations toward front-line staff contradicts

Deming's call for an outside perspective (1994/2000). In addition, these encouragements are not explicitly defined, are not sufficiently focused on system issues, and may devolve into meaningless motivational slogans (Deming, 1994/2000).

97 The second of the sources on RCA improvements also analyzed healthcare investigations. Peerally, Carr, Waring, and Dixon-Woods (2016) outlined eight problems with the current RCA process, which included the following (pp. 417-419):

 “The unhealthy quest for [a single] root cause;

 “Questionable quality of RCA investigations;

 “Political hijack;

 “Poorly designed or implemented risk controls;

 “Poorly functioning feedback loops;

 “Disaggregated analysis focused on single organisations and incidents;

 “Confusion about blame; and

 “The problem of many hands.”

To counter these problems, these authors suggested five steps:

 Establish professional, cross-functional, investigatory bodies;

 Include the perspectives of affected persons during the investigation (e.g., customers and other third parties who may have witnessed errors);

 Increase understanding of blame culture, and recognize all responsibilities, including need for organizational learning;

 Increase aggregate analysis at departmental, organizational, and national levels; and

 Proactively detect hazards and assess risks via other tools (e.g., FMEA).

In theory, these suggestions would help drive improvements to failure investigations. In contrast to the first of the RCA improvement sources, establishing professional investigatory bodies agreed with Deming's outside perspective (1994/2000).

98 As stated previously, however, regulatory pressure tended to discourage organizations from openly disclosing negative results. Therefore, the FDA and other regulatory bodies would have to change regulatory policy to facilitate this process (for example, see FDA,

2015; 2016).

The last of the sources on RCA improvement discussed improvements to a specific RCA tool. Card (2013) proposed supplementing the fishbone diagram with barrier analysis, creating a hybrid diagram, which considers factors that support and those that impede failures. Applied to pharmaceutical QC laboratories, this enhancement would require less training because of the current familiarity of fishbone diagrams. In addition, this approach could prompt the investigation to evaluate the appropriateness of system barriers. However, Card (2013) admitted the tool was not sufficiently systems- focused and therefore suboptimal.

Psychological Distance

This thesis identified an additional approach (source i) for improving failure investigations via the psychology of causal thinking. Rim, Hansen, and Trope (2013) discussed the focus of humans under stress and argued that increased temporal, spatial, social, and psychological distance from the event allows for improved causal thinking. In the course of their research, the authors discovered that causal thinking promotes more abstract thinking, which further supports causal thinking. Because of this, Rim, Hansen, and Trope (2013) recommended the following:

 Increase temporal distance as much as possible

 Increase social and psychological distance by allowing causal thinking by uninvolved parties

99

 Imaging the event as occurring in the future

 Encourage causal thinking before making judgments

Applied to pharmaceutical QC laboratories, for example, third-party auditors are temporally and socially distant. When reviewing investigations and corrective actions, therefore, auditors are able to see latent causes and use system thinking more effectively, which coincides with Deming’s admonition for an outside perspective (1994/2000). In contrast, laboratory personnel are socially and psychologically proximal and tend to focus more on consequences. Albeit less so than front-line employees and supervisors, even dedicated internal investigators are socially and psychologically proximal to a degree and, therefore, are susceptible to focus on consequences.

A side effect of psychological distance, however, would “focusing on causes versus consequences leads participants to perceive events as more distant in time” (Rim,

Hansen, & Trope, 2013, p. 464). Reconciling this side effect, therefore, would require two separate groups for handling investigations: one group that is psychologically distant and focuses on RCA and a second that is psychologically proximal and focuses on risk assessment.

Book Source—Thinking in Systems

As shown in Table 2, this thesis included three book sources (j through l) in its search for improvements to the failure investigations in pharmaceutical QC laboratories.

The first of these sources focused on systems theory, the second on the Toyota

Production System, and the last on conversational soft skills.

100 In Chapters 1 and 2, this thesis cited Thinking in Systems by Donella Meadows

(2008) as a source for understanding of system thinking and system behavior. In the

latter half of this reference, Meadows provided 12 leverage points for intervening in a system. In order of increasing effectiveness, these leverage points are as follows

(Meadows, 2008, pp. 145-165):

1. “Numbers: Constants and parameters such as subsidies, taxes, standards…the size of the flows…and how quickly those numbers can be changed;

2. “Buffers: The sizes of stabilizing stocks relative to their flows;

3. “Stock-and-Flow Structures: Physical systems and their notes of intersections;

4. “Delays: The lengths of times relative to the rates of system changes;

5. “Balancing Feedback Loop: The strength of the feedbacks relative to the impacts they are trying to correct;

6. “Reinforcing Feedback Loops: The strength of the gain of driving loops;

7. “Information Flows: The structure of who does and does not have access to information;

8. “Rules: Incentives, punishments, constraints;

9. “Self-organization: The power to add, change, or evolve system structure;

10. “Goals: The purpose or function of the system;

11. “Paradigms: The mind-set of which the system—Its goals, structure, rules, delays, parameters—arises; and

12. “Transcending Paradigms,”

Leverage points 7 through 12 align with current practices in laboratory investigations.

For example, an organization might consider a system of the total rate of human error

committed by trained employees (stock). Some inflows could include individual rates of

101 error based on the number and quality of current employees, rate of hiring, quality of training, and number of assigned tasks. Some outflows could include attempts to decrease the rate of errors via retraining, reprimands, and attrition of error-prone employees. In attempt to reduce the error rate, the organization may attempt the following actions:

 Leverage 12: Change the hiring rate, the training frequency, the employee count, or the number of tasks assigned per employee,

 Leverage 11: Increase the quality of the training program or the hiring pool,

 Leverage 10: Modify hiring or training policy or reporting structure,

 Leverage 9: Change the frequency of error review and reward/reprimand delivery,

 Leverage 8: Change the strength of reprimands or firing policies, and

 Leverage 7: Change the strength of rewards for low error rates or successes in hiring or training.

Leverage Points 7 through 12, however, are at the bottom of the list, and, therefore, are less likely to supply the power to overturn the tendency to blame employees for system problems.

Starting with Leverage Point 6, Meadows (2008) outlined more effective means for system change. However, Leverage Points 5 down to 1 would be increasingly difficult to implement because of the current regulatory structure. For example, in the hypothetical laboratory system with the human error rate as the stock, an organization could attempt the following changes:

 Leverage 6: Change how errors are understood, reported, and discussed because “missing information flows is one of the most common causes of system malfunction …[and] can be a powerful intervention, usually much easier and cheaper than rebuilding physical infrastructure” (Meadows, 2008, p. 157). While this may be relatively easy to implement, it may be risky for organizations fearful of increased regulatory scrutiny.

 Leverage 5: Change the rules regarding hiring, training, error investigation, reporting, and attrition. This is effective because “power over the rules is real power” (Meadows, 2008, p. 158). As discussed in Improvements to Root Cause Analysis starting on page 96 of this thesis, this is possible in a limited capacity if organizations modify the personnel and process for conducting RCA investigations. As stated previously, however, full implementation is not possible given current regulatory expectations.

 Leverage 4: Empower all employees to suggest and make changes to the system. “The ability to self-organize is the strongest form of system resilience” (Meadows, 2008, p. 159). Instead of relying on separate departments, employees empowered through idealized redesign can modify task allocations, operating procedures, and organizational structures in response to changing needs. However, current regulations prohibit changes to most systems without thorough justification and validation. Therefore, organizations may not be able to implement this leverage point without regulatory changes.

 Leverage 3: Change the purpose or function of failure investigations and all systems that influence it (Meadows, 2008, p. 161). As discussed in Improvements to Root Cause Analysis, the current RCA approach favors bureaucracy over learning (Nicolini, Waring, & Mengis, 2011). Changing a system’s aim to emphasize long-term, continuous improvement can reduce the prevalence of remedial training significantly. However, while change can come from regulatory guidance (for example, see FDA, 2015), without inspired leadership, the change can fall victim to the pressures of political agendas, quarterly profits, and other short-term results.

 Leverage 2: Change the paradigm out of which RCA, the perception of error, and the system as a whole arise (Meadows, 2008, p. 162). This, of course, relates to Deming’s attempt to reeducate management and government via his 14 Points and the System of Profound Knowledge. However, complete change is outside the scope of improving RCA investigations because it cannot be accomplished without global support.

 Leverage 1: Transcending paradigms, also outside the scope of RCA investigations, can entail questioning even Deming’s perspectives, a task more suitable for long-term discussions of quality theory (Meadows, 2008, p. 164).


With regard to the leverage points, however, Meadows (2008) warned against simplifying systems because they are often too complex for simple approaches. In summary, the leverage points discussed in Thinking in Systems are powerful resources to understand and improve system function, but significant regulatory change would be required to fully implement them.

Book Source—Toyota Kata

The second book source summarized the Toyota Production System (TPS) but, in a paradigm-changing approach, challenged how organizations understood and implemented Toyota’s methods. Specifically, while TPS tools are nearly ubiquitous, outside organizations continue to underperform because of “the results-oriented level of thinking inherent …in the Western world” (Johnson, 2009, pp. vii-ix). In assessing the success of TPS implementation, Rother criticized the benchmarking trap of comparing an organization to Toyota and its results. Akin to Meadows’ warning against generalizing about complex systems, Rother argued against benchmarking because TPS processes “are built upon invisible routines of thinking and acting …that differ significantly from those found in most companies” (Rother, 2009, p. 4). For example, in contrast to reserving corrective actions for RCA investigations, “At Toyota, improvement and adaptation are systematic and the method is a fundamental component of every task performed, not an add-on or a special initiative” (Rother, 2009, p. 14).

According to Rother, Toyota follows a learned process for evaluating and acting on change. While presented in list format for the purposes of this thesis, the process is not an action list, nor is it benchmarking by copying results or adopting countermeasures without process understanding (Rother, 2009). Instead, Toyota employs behavior patterns and routines, known as kata, for dealing with the dynamic conditions common in organizations (Rother, 2009). Rather than a specific procedure applied to static events, the improvement kata “gives members of the organization an approach, a means, for handling an infinite variety of situations” (Rother, 2009, p. 16) because Toyota recognizes, “the most important factor that makes [them] successful is the skill and actions of all the people in the organization” (Rother, 2009, p. 13). When presented with opportunities for improvement, Rother (2009) described the improvement kata as follows (a minimal sketch of the loop appears after the list):

1. Understand the organizational direction (e.g., vision);
2. Acknowledge and understand the current process state;
3. Select a process target condition that:
o Works toward the organizational direction,
o Works within organizational constraints to achieve the target condition,
o Focuses on process improvements that reveal problems (e.g., heijunka), and
o Includes opportunities for organizational improvement; and
4. Move toward the target condition via Plan-Do-Check-Act (PDCA), because
o Evolution requires experimentation, which permits use of the scientific method and generates learning opportunities.
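
As a rough restatement of these four steps, the sketch below expresses the improvement kata as a loop over PDCA cycles. The numeric “process condition,” step size, and function names are assumptions made only for illustration; Rother (2009) presents the kata as a behavior routine, not as code.

    # Illustrative loop through the four kata steps; the numeric "condition,"
    # step size, and acceptance rule are assumptions made for this sketch.
    def run_experiment(condition, change):
        """Do: apply one small change at a time to the process."""
        return condition + change

    def improvement_kata(current, target, step=1.0, max_cycles=50):
        """Move a process from its current condition toward a target condition."""
        for _ in range(max_cycles):
            gap = target - current                      # grasp the current condition
            if abs(gap) < 1e-9:                         # target condition reached
                break
            planned = step if gap > 0 else -step        # Plan: the next single step
            outcome = run_experiment(current, planned)  # Do
            if abs(target - outcome) < abs(gap):        # Check: plan versus actual
                current = outcome                       # Act: adopt the change
        return current

    # Example: close the gap from a current condition of 3 toward a target of 10.
    print(improvement_kata(current=3.0, target=10.0))  # -> 10.0

The one-change-per-cycle structure echoes Rother’s point, quoted later in this section, that PDCA “changes [to] only one factor at a time, so you can see correlation.”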

Overall, the routine is what matters, and “ultimately you should be able to walk through the factory and at each process ask, ‘What challenge’ – target condition – ‘are you currently trying to reach here?’” (Rother, 2009, p. 116). For example, Toyota not only uses common Lean techniques such as leveling patterns and Kanban routines as target conditions, but also as means to achieve the target condition and reveal process problems (Rother, 2009). However, target conditions are not targets, which are merely outcomes and results (Rother, 2009, p. 103), nor are they randomly and haphazardly benchmarked countermeasures, because “you have to learn to hold yourself back and first define where you want to go before you get started on moving there” (Rother, 2009, p. 117). Instead, “a target condition is a description of a process operating in a way” (Rother, 2009, p. 103) and can be “technical and nontechnical …[but] at least some aspects …should be measurable” (Rother, 2009, p. 117). Nevertheless, while numeric values and endpoints are important, “even more important are the means by which we achieve those targets” (Rother, 2009, p. 104). For example, “financial calculations alone [cannot] determine direction …[because] an economic break-even point is a dependent variable, not an independent constraint that determines direction” (Rother, 2009, p. 52). After all,

After all,

The primary intention of specifying standards at Toyota is not, by doing so, to

establish discipline, accountability, or control the workers, but rather to have a

reference point; to make plan-versus-actual comparison possible, in this case by

the team leader, so that gaps between what is expected and what is actually

occurring become apparent. (Rother, 2009, p. 114)
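
The reference-point idea in the quotation above can be illustrated with a small plan-versus-actual comparison. The checkpoint names, planned values, and tolerance below are assumptions invented for this sketch, not values from Rother (2009) or from any laboratory.

    # Hypothetical plan-versus-actual comparison; all values are assumptions.
    planned = {"sample prep (min)": 15, "run time (min)": 45, "review (min)": 30}
    actual = {"sample prep (min)": 14, "run time (min)": 58, "review (min)": 31}

    TOLERANCE = 0.10  # flag gaps larger than 10% of the planned value

    for checkpoint, plan in planned.items():
        gap = actual[checkpoint] - plan
        if abs(gap) > TOLERANCE * plan:
            # The gap is a prompt for inquiry into the process, not for blame.
            print(f"{checkpoint}: planned {plan}, actual {actual[checkpoint]}, gap {gap:+d}")

The point of the sketch is only that a stated standard makes the gap visible; what is done with the gap is the learning routine described above.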

However, pharmaceutical QC laboratories may not be able to fully implement Toyota Kata. In particular, PDCA and Toyota Kata rely on “temporary measures to contain the abnormal occurrence until the root cause can be addressed” and “changes [to] only one factor at a time, so you can see correlation” (Rother, 2009, p. 194). The FDA expects robust corrective actions; therefore, it is not acceptable to conduct one-factor experiments that do not address all possible causes. A duplicate laboratory could be used to propose and conduct experiments, but maintaining qualified yet unused analytical equipment may be cost prohibitive. Rother appears to be mindful of this when he argues, “If the process is not stable or is unable to meet customer quality or quantity requirements, address this before trying to make other improvement” (2009, p. 126). Nevertheless, even if experiments can be executed on active equipment, any action must be approved by the laboratory quality unit prior to implementation. Of course, this can demand faster turnaround of investigation, documentation, and approval processes, which would increase costs and perhaps increase errors.

Book Source—Conversational Capacity

In his discussion of SoPK and psychology, Deming admonished, “People are different from one another. A manager of people must be aware of these differences, and use them for optimization of everybody’s abilities and inclinations” (Deming, 1994/2000, p. 108). The last of the book sources, Weber’s Conversational Capacity, acknowledges these differences and offers a discipline to “help build teams that are increasingly healthy, sustainable, and effective” by increasing the capacity of conversations to work through conflict (Weber, 2013, pp. 2, 15). Weber defines conversational capacity as “the ability to have open, balanced, non-defensive dialogue about tough subjects and in challenging circumstances” (2013, p. 15) and contends, “We know we’re communicating in an open, balanced, non-defensive way when there is balance between candor and curiosity. We don’t mind sharing our ideas and perspectives, and we’re equally interested in exploring the ideas and perspectives of others” (2013, p. 16). In contrast to teams mired in “un-discussable issues [or] unproductive issues,” Weber observes that “where the capacity is sufficient, you can remain balanced and do good work …[in spite of] conflict or tension …[because] your team’s return on conversation is high” (Weber, 2013, pp. 17-18).

While theoretically simple, “it requires tremendous energy to break free of the gravitational pull of our preprogrammed responses and establish a new behavioral trajectory” (Weber, 2013, p. 63). For example, Weber describes typical conversational behaviors as “primal biological imperatives …[that] provoke us to act in automatic ways [and] violate our otherwise good intentions” (Weber, 2013, pp. 36-37). In particular, he describes “two troublesome tendencies” that discourage honest disclosure: minimize and win behaviors (pp. 38-39, 44). To minimize, he contends, is “the conversational manifestation of the flight response …[and favors] caution at the expense of candor” (pp. 39-40). Conversely, to win is the conversational manifestation of the fight response, which “feels a passionate need to save [others] from their mistakes …[by] raising your voice, putting forward your view in forceful terms, discounting the logic of others, and arguing with anyone who dares to disagree” (Weber, 2013, pp. 44-45).

These tendencies are fundamentally flawed, Weber contends, because, “Given our brain’s fervent predilection for ego-friendly ideas and information, when we make a decision based solely on our own perspective, it’s not an informed choice—it’s a biased one” (Weber, 2013, p. 64). Nevertheless, Weber warns,

Given all the mischief they cause, it’s tempting to pathologize our primal tendencies and treat them as unadulterated evils. But that would be unwise, …it’s not the behaviors themselves we should be concerned about, but the alignment between our behaviors and our intentions, and the fight-or-flight reactions that so easily pull them apart. (Weber, 2013, p. 58)

According to Weber, aligning behaviors and intentions via high conversational capacity is achieved through the mindful use of the following four distinct skills, which focus on candor and curiosity and are extremely difficult to balance under pressure:

 “Candor Skill 1: State a direct, succinct position … so that others are less likely to misunderstand the idea we’re trying to communicate;

 “Candor Skill 2: Explain the underlying thinking that informs our position … [and] articulates two things: The data we’re paying attention to, and how we’re interpreting that data;

 “Curiosity Skill 1: Test our perspective…especially people with contrasting perspectives” by presenting our views “as hypotheses to check and improve rather than truths to protect and sell;

 “Curiosity Skill 2: Inquire genuinely into the views of others…striking a balance between candor and curiosity…[which encourages] other people to treat their views responsibly” (Weber, 2013, pp. 78-79, 81, 83, 86-87, 91-93).

This process, Weber explains, “dramatically improves our conversations and meeting[s] …[by focusing on] informed choice [and] responding to conflict, dissent, or disagreement in an open, balanced, [and] learning-focused way” (2013, pp. 100-101). Because not all conversations require the rigor of a formal method, Weber advises, “The more important and challenging the conversation we’re facing, the higher the conversational capacity we need to engage it” (pp. 106, 109). In pharmaceutical QC laboratories, for example, a team can select a caterer for an employee appreciation dinner with far less conversational capacity than it needs for a failure investigation into a human error that could lead to employee discipline or termination. With this in mind, Weber (2013) offers the following points of emphasis:

 “Learning to recognize our tendencies and triggers provides us with two distinct advantages. First… we can’t manage something we can’t see… Second, awareness …generates more empathy, a more compassionate view of people who behave in ways that we don’t understand” (p. 58);

 “It’s not a popularity contest, and all views are not equal. The more rigorous, logical, and thoughtful the view, the more weight it carries. And the more expert the source, the more credence we should give the perspective” (p. 85);

 “Unlike a debate, it’s not necessary to have our thinking flawlessly arranged and articulated in order to engage with others” (p. 86);

 “These skills are a way of orchestrating balanced dialogue; they’re not a decision-making process … balanced dialogue is not about talking until everyone on the team reaches agreement; it’s about helping the person making the decision make the most informed and effective choice possible” (pp. 112, 172); and

 “All team members do not have to be equally skilled for the conversational capacity of a team to increase; nor is it dependent on everyone having the same level of commitment to using the skills… One or two people can have a dramatic impact on a team’s performance” (p. 178).

Overall, conversational capacity is a soft skill for discussing sensitive issues that are constrained by difficulty, expense, timelines, or regulations. In contrast to tactics focused on gathering empirical evidence in order to make decisions (e.g., PDCA in Toyota Kata), high conversational capacity facilitates thought experiments prior to task execution. This is especially helpful in regulated industries like pharmaceutical QC laboratories, where teams within an organization cannot change validated systems and processes without prior approval and justification. For example, with high conversational capacity, an organization can overcome stagnating issues such as blame-retrain while remaining compliant with investigational timelines. After all, conversational capacity teaches, “Achieving balance is the objective… we’re more interested in exploring divergent ideas than in being comfortable or right [because] informed decision-making is the superordinate goal …[such that] everyone who helps orchestrate a conversation that leads to the best idea, wins” (Weber, 2013, pp. 106, 113, 119).

CHAPTER 5

SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS

We know a tremendous amount about how the world works, but not nearly enough. Our knowledge is amazing; our ignorance even more so. Donella Meadows

When failures occur, quality-minded organizations conduct and document investigations to summarize findings, justify conclusions, and recommend actions to remedy the failure. In pharmaceutical QC laboratories, the FDA regulates these investigations by requiring corrective actions that address the root causes or fundamental contributing factors of failures (FDA, 2009; Vinnem et al., 2010). The approaches and outcomes vary, but effective investigations help demonstrate a “robust quality culture” (FDA, 2015, p. 12).

With regard to failures influenced by human error, remedial training for the involved person is a common approach. Generally, this approach assumes the human operator has caused the failure; therefore, the operator deserves the blame. While not explicitly prohibited, recent documents published by the FDA suggest this approach is not acceptable.

To investigate this claim, a review of the extant literature was performed for this thesis to verify whether two elements of Deming’s System of Profound Knowledge—Psychology and Appreciation for a System—would apply to failure investigations in pharmaceutical QC laboratories.

Based on the literature review, this thesis discovered that remedial training failed to address the latent system issues that influence front-line errors. In addition, this thesis observed that remedial training did not account for the psychological factors that influence error perception and commission. The review also suggested these failings were related to the history of quality management, with particular focus on Adam Smith and his efforts to segment the workforce, Frederick Taylor and his focus on increasing productivity, and the concept of analytical thinking, which organizations used to segment broad concepts and systems into simpler but incomplete ideas. Historical approaches arose to counter the focus on segmentation and increased productivity. However, the negative influences remained, demonstrated by the continued emphasis on managing by sales goals, stock prices, and other short-term results at the expense of ethical behavior, sustainability, and long-term profitability.

Aware of the historical influence of managing by results, this thesis sought tactics aimed at countering the reliance on remedial training in pharmaceutical QC laboratories. Using the results of the literature review, this thesis developed a scoring rubric to evaluate potential tactics for their ability to satisfy Deming’s Psychology and Appreciation for a System. This thesis conducted a literature search for prospective approaches and then evaluated the approaches using the scoring rubric.

Based on the evaluation presented in Table 3 and summarized in Figure 1, the following sources were identified as tactics of interest (a sketch of how such a tally could be composed follows the list):

 Decision-making and risk management (Carson, 2012): score 15 out of 22;
 Thinking in Systems (Meadows, 2008, pp. 145-165): score 18 out of 22;
 Lean management, Toyota Kata (Rother, 2009): score 18 out of 22; and
 Conversational Capacity (Weber, 2013): score 13 out of 22.
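
For illustration only, the following sketch shows one way a 22-point tally could be composed. The assumption of eleven criteria scored from 0 to 2 (11 × 2 = 22) and the sample scores are inventions for this sketch; the thesis’s actual rubric structure is not reproduced here, and its real scores appear in Table 3.

    # Hypothetical tally assuming eleven criteria scored 0-2 each (11 x 2 = 22);
    # the criteria count, scale, and sample scores are assumptions for the sketch.
    def tally(scores, max_per_criterion=2, criteria=11):
        """Sum per-criterion scores and return them with the rubric maximum."""
        if len(scores) != criteria:
            raise ValueError(f"expected {criteria} criterion scores")
        if any(s < 0 or s > max_per_criterion for s in scores):
            raise ValueError(f"each criterion is scored 0-{max_per_criterion}")
        return sum(scores), criteria * max_per_criterion

    # Illustrative scores for one candidate tactic (not the thesis's data):
    score, maximum = tally([2, 1, 2, 1, 2, 2, 1, 2, 0, 2, 1])
    print(f"{score} out of {maximum}")  # -> 16 out of 22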

A summary of the common elements of these four highest-scoring approaches is as follows:

 Investigators and decision makers must explain and justify their perspective and conclusions (Carson, 2012; Rother, 2009; Weber, 2013).

 Organizations must discover how to consistently encourage and receive honest, constructive input from all its members, because therein lies the strength of a successful organization (Meadows, 2008; Rother, 2009; Weber, 2013).

 Investigations must adopt a systemic perspective that does not blame individuals for system causes (Carson, 2012; Rother, 2009; Weber, 2013).

 Organizations must recognize biases, evaluate resource availability, and determine the level of managerial support for front-line decisions (Carson, 2012; Meadows, 2008; Rother, 2009; Weber, 2013).

 Investigations and other forms of inquiry must promote organizational learning and continuous improvement (Carson, 2012; Meadows, 2008; Rother, 2009; Weber, 2013).

In review, organizations can implement these elements so long as the implementation does not become overly bureaucratic or formal (Iedema et al., 2006). For example, organizations can evaluate their adherence to the elements via scored metrics, provided the values do not become rating systems (see Deming, 1982/2000) or permit…

Managing from a distance through reported metrics [which] leads to overlooking or obscuring small problems …and inhibits our ability to learn… There is no combination of outcome metrics and incentive systems that by themselves will generate continuous improvement and adaptation. (Rother, 2009, p. 166)

Admission of Biases and Flaws

There are several flaws and biases in this thesis. First, the scoring rubric is biased toward Toyota Kata and Thinking in Systems because these sources were cited in the literature review. Second, the scoring rubric lacks empirical justification, and this thesis did not employ empirical analysis of the results. Third, the elements suggested for implementation lack definitive detail. Specifically, the suggested elements are not precise tools or tactics, which may generate uncertainty or conflict with established organizational culture, much like Deming’s 14 Points or System of Profound Knowledge (Evans & Lindsay, 2008/2014). Because of these flaws and biases, future research should evaluate additional resources to build a scoring rubric, seek and validate data via empirical studies, and evaluate specific implements, especially with regard to organizational culture (Kotter, 1996).

Notwithstanding its biases and flaws, the aim of this thesis was to suggest improvements to the process of conducting investigations, not to resolve all known challenges. With this aim in mind, this thesis did not seek specific implements because benchmarking alone cannot produce continuous improvement unless it is accompanied by changes to the thinking and practices that define organizational culture (Rother, 2009). In addition, “you can expose problems only to the degree that you can handle them” (Rother, 2009, p. 183). Therefore, organizations must select and adopt practices that fit their business culture, and at a rate at which the organization can change.

In spite of culture, however, “there’s no substitute for actually being informed” (Weber, 2013, p. 65), even if some things are unknowable (Deming, 1982/2000). In addition, the business world is impatient, increasingly competitive (Kotter, 1996), and not forgiving of organizations that ask the wrong questions (ACASA, 1992/2011). Overcoming these challenges requires that organizations “value candid discussions” over false politeness, passive-aggressive diplomacy, and “killing-the-messenger-of-bad-news” (Kotter, 1996, p. 163). This will not be easy, but it is entirely possible if “we …subordinate our base predilections to a higher set of values, countering our arrogance with humility, our certainty with curiosity, our caution with candor, and our timidity with courage” (Weber, 2013, pp. 65, 204).

REFERENCES

Ackoff Center for Advancement of Systems Approaches. (2011). A conversation between Russell Ackoff and Edward Deming. Retrieved from http://ackoffcenter.blogs.com/ackoff_center_weblog/2011/04/a-converstaion-between-russell-ackoff-and-edward-deming.html

21 C.F.R. § 211 (2014). Current good manufacturing practice for finished pharmaceuticals. Retrieved from https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfCFR/CFRSearch.cfm?CFRPart=211&showFR=1

American Society for Quality. (n.d.). About ASQ - W. Edwards Deming. Retrieved from http://asq.org/about-asq/who-we-are/bio_deming.html

American Society for Quality. (n.d.). Beyond total quality. Retrieved from http://asq.org/learn-about-quality/history-of-quality/overview/beyond-total-quality.html

American Society for Quality. (n.d.). History of quality. Retrieved from http://asq.org/learn-about-quality/history-of-quality/overview/overview.html

American Society for Quality. (n.d.). What is a quality management system (QMS)? ISO 9001 & other quality management systems. Retrieved from http://asq.org/learn-about-quality/quality-management-system/

American Society for Quality. (n.d.). What is root cause analysis (RCA)? Retrieved from http://asq.org/learn-about-quality/root-cause-analysis/overview/overview.html

Baris, E. (2015). Expatriates in South Korea and their perception of commitment to quality (Master's thesis, San Diego State University). Retrieved from http://scholarworks.calstate.edu/bitstream/handle/10211.3/140687/Baris_sdsu_0220N_10710.pdf?sequence=1

Barouch, G., & Kleinhans, S. (2015). Learning from criticisms of quality management. International Journal of Quality and Service Sciences, 7(2/3), 201-216. Retrieved from http://dx.doi.org/10.1108/IJQSS-02-2015-0026

Baxter, G., & Sommerville, I. (2011). Socio-technical systems: From design methods to systems engineering. Interacting with Computers, 23(1), 4-17. Retrieved from http://dx.doi.org/10.1016/j.intcom.2010.07.003

Becker, C., & Glascoff, M. (2014). Process measures: A leadership tool for management. The TQM Journal, 26(1), 50-62. Retrieved from http://dx.doi.org/10.1108/TQM-02-2013-0018

Bennett, B., & Provost, L. (2015). What's your theory? Quality Progress, 48(7), 36. Retrieved from http://asq.org/quality-progress/2015/07/continuous-improvement/whats-your-theory.html

Bornstein, B. H., & Greene, E. (2011). Jury decision making: Implications for and from psychology. Current Directions in Psychological Science, 20(1), 63-67. Retrieved from http://dx.doi.org/10.1177/0963721410397282

Caraco Pharmaceutical Laboratories, Ltd. (2009, June 19). Untitled form response. Retrieved from http://www.fda.gov/ucm/groups/fdagov-public/@fdagov-afda-orgs/documents/document/ucm172108.pdf

Card, A. J. (2013). A new tool for hazard analysis and force-field analysis: The lovebug diagram. Clinical Risk, 19(4-5), 87-92. Retrieved from http://dx.doi.org/10.1177/1356262213510855

Carson, D. (2012). Reviewing reviews of professionals' risk-taking decisions. Journal of Social Welfare and Family Law, 34(4), 395-409. Retrieved from http://dx.doi.org/10.1080/09649069.2012.753729

Chaloner-Larsson, G., Anderson, R., & Egan, A. (1997, January). A WHO guide to good manufacturing practice (GMP) requirements. World Health Organization. Retrieved from http://apps.who.int/iris/bitstream/10665/64465/1/WHO_VSQ_97.01.pdf

Chen, M., Chen, C. C., & Sheldon, O. J. (2016). Relaxing moral reasoning to win: How organizational identification relates to unethical pro-organizational behavior. The Journal of Applied Psychology, 101(8), 1082-1096. Retrieved from http://dx.doi.org/10.1037/apl0000111

Chiarini, A. (2011). Japanese total quality control, TQM: Deming's system of profound knowledge, BPR, Lean and Six Sigma: Comparison and discussion. International Journal of Lean Six Sigma, 2(4), 332-355. Retrieved from http://dx.doi.org/10.1108/20401461111189425

Cintron, R. (2015). Human factors analysis and classification system interrater reliability for biopharmaceutical manufacturing investigations (Doctoral dissertation). Retrieved from http://scholarworks.waldenu.edu/cgi/viewcontent.cgi?article=1193&context=dissertations

Covey, S. R. (1989). The 7 habits of highly effective people. New York, NY: Simon & Schuster. ISBN 978-1-4516-3961-2

Dahlgaard-Park, S. M. (2011). The quality movement: Where are you going? Total Quality Management, 22(5), 493-516. Retrieved from http://dx.doi.org/10.1080/14783363.2011.578481

Deming, W. E. (2000). Out of the crisis. Cambridge, MA: MIT Press. (Original work published 1982). ISBN 0-262-54115-7

Deming, W. E. (2000). The new economics (2nd ed.). Cambridge, MA: MIT Press. (Original work published 1994). ISBN 0-262-54116-5

Directorate-General for Health and Food Safety. (2012, June 28). EU guidelines for good manufacturing practice for medicinal products for human and veterinary use: Pharmaceutical quality system (Vol. 4, Ch. 1). Retrieved from http://ec.europa.eu/health/files/eudralex/vol-4/vol4-chap1_2012-06_en.pdf

Evans, J. R., & Lindsay, W. M. (2014). Managing for quality and performance excellence (9th international ed.). Mason, OH: South-Western Cengage Learning. (Original work published 2008). ISBN 1-285-09459-X

Gallup. (2015). Gallup Q12 employee engagement survey. Retrieved from https://q12.gallup.com/public/en-us/Features

Hoyer, R. W., Hoyer, B. B., Crosby, P. B., & Deming, W. E. (2001). What is quality? Quality Progress, 34(7), 52. Retrieved from http://ingenieria.udea.edu.co/~cpatino/Gestion%20Proces%202/Que%20es%20calidad%20-%20Clase%201.pdf

Iedema, R., Jorm, C., Braithwaite, J., Travaglia, J., & Lum, M. (2006). A root cause analysis of clinical error: Confronting the disjunction between formal rules and situated clinical activity. Social Science & Medicine, 63(5), 1201-1212. Retrieved from http://dx.doi.org/10.1016/j.socscimed.2006.03.035

International Organization for Standardization. (2015). Quality management principles. Retrieved from http://www.iso.org/iso/pub100080.pdf

Jayswal, A., Li, X., Zanwar, A., Lou, H., & Huang, Y. (2011). A sustainability root cause analysis methodology and its application. Computers & Chemical Engineering, 35(12), 2786-2798. Retrieved from http://dx.doi.org/10.1016/j.compchemeng.2011.05.004

Juran, J. M., & Godfrey, A. B. (Eds.). (1999). Juran's quality handbook (5th ed.). New York, NY: McGraw-Hill. ISBN 0-07-034003-X. Retrieved from http://www.academia.edu/download/38161019/juran.pdf

Korakianiti, E., & Rekkas, D. (2011). Statistical thinking and knowledge management for quality-driven design and manufacturing in pharmaceuticals. Pharmaceutical Research, 28, 1465-1479. Retrieved from http://dx.doi.org/10.1007/s11095-010-0315-3

Koski, G., Tobin, M. F., & Whalen, M. (2014). The synergy of the whole: Building a global system for clinical trials to accelerate medicines development. Clinical Therapeutics, 36(10), 1356-1370. Retrieved from http://dx.doi.org/10.1016/j.clinthera.2014.09.006

Kotter, J. P. (1996). Leading change. Boston, MA: Harvard Business School Press. ISBN 0-87584-747-1

Kozlowski, M. A. (2014). Analysis of FDA guidance on corrective and preventive actions systems: Voice of customer perspective (Master's thesis). Available from ProQuest Dissertations and Theses database. (UMI No. 1526319)

Kumar, V., & Pansari, A. (2014). The construct, measurement, and impact of employee engagement: A marketing perspective. Customer Needs and Solutions, 1(1), 52-67. Retrieved from http://dx.doi.org/10.1007/s40547-013-0006-4

Kuwahara, S. S. (2007). A history of the OOS problem. BioPharm International, 20(11), 42. Retrieved from http://www.biopharminternational.com/history-oos-problem

Lagrosen, Y., & Travis, F. T. (2015). Exploring the connection between quality management and brain functioning. The TQM Journal, 27(5), 565-575. Retrieved from http://dx.doi.org/10.1108/TQM-08-2013-0093

Maguad, B. A., & Krone, R. M. (2014). Managing for quality in higher education: A systems perspective. ISBN 978-87-403-0205-9. Retrieved from http://stritapiret.or.id/wp-content/uploads/2013/02/managing-for-quality-in-higher-education.pdf

Marshall, M., Pronovost, P., & Dixon-Woods, M. (2013). Promotion of improvement as a science. The Lancet, 381(9864), 419-421. Retrieved from http://dx.doi.org/10.1016/S0140-6736(12)61850-9

Martínez-Tur, V., Peñarroja, V., Serrano, M. A., Hidalgo, V., Moliner, C., Salvador, A., ... & Molina, A. (2014). Intergroup conflict and rational decision making. PLOS ONE, 9(12): e114013. Retrieved from http://dx.doi.org/10.1371/journal.pone.0114013

Meadows, D. H. (2008). Thinking in systems. In D. Wright (Ed.). White River Junction, VT: Chelsea Green Publishing. ISBN 978-1-60358-055-7

Meridian Medical Technologies, Inc. (2012, March 5). 483 response. Retrieved from ORA FOIA Electronic Reading Room at http://www.fda.gov/ucm/groups/fdagov-public/@fdagov-afda-orgs/documents/document/ucm383892.pdf

Moodliar, S., Genis, E., Anelich, L., & Puren, A. (2014). The global need for quality. Medical Technology SA, 27(2), 26-32. Retrieved from http://mtsaj.co.za/index.php/mtsaj/article/viewFile/93/88

Naysmith, P. (2012, October 29). It's easy to poke holes in something – Especially when it already has holes. Retrieved from http://www.qualitydigest.com/inside/quality-insider-column/it-s-easy-pick-holes-something.html

Neubert, J. C., Mainert, J., Kretzschmar, A., & Greiff, S. (2015). The assessment of 21st century skills in industrial and organizational psychology: Complex and collaborative problem solving. Industrial and Organizational Psychology, 8(02), 238-268. Retrieved from https://doi.org/10.1017/iop.2015.14

Nicolini, D., Waring, J., & Mengis, J. (2011). Policy and practice in the use of root cause analysis to investigate clinical adverse events: Mind the gap. Social Science & Medicine, 73(2), 217-225. Retrieved from http://dx.doi.org/10.1016/j.socscimed.2011.05.010

Ono, R. (2013). Quality management systems success: Finding a balance between quality tools and TQM soft skills (Master's thesis). Available from ProQuest Dissertations and Theses database. (UMI 1524278)

Paciarotti, C., Mazzuto, G., & D'Ettorre, D. (2014). A revised FMEA application to the quality control management. The International Journal of Quality & Reliability Management, 31(7), 788. Retrieved from http://dx.doi.org/10.1108/IJQRM-02-2013-0028

Panaggio, T. (2014). Standing in your own line and the quest for quality in teams. The Journal for Quality and Participation, 37(3), 23. Retrieved from http://0-search.proquest.com.torofind.csudh.edu/docview/1627999407

Parry, G., Mate, K., Perla, R., & Provost, L. (2013). Promotion of improvement as a science [Peer commentary on the paper "Promotion of improvement as a science" by M. Marshall, P. Pronovost, and M. Dixon-Woods]. The Lancet, 381(9881), 1902-1903. Retrieved from http://www.thelancet.com/pdfs/journals/lancet/PIIS0140-6736(13)61159-9.pdf

Pearson, T. A. (1999). Measurements and the knowledge revolution. Quality Progress, 32(9), 31. Retrieved from http://0-search.proquest.com.torofind.csudh.edu/docview/214742383

Peerally, M. F., Carr, S., Waring, J., & Dixon-Woods, M. (2016). The problem with root cause analysis. BMJ Quality & Safety, 0, 1-6. Retrieved from http://dx.doi.org/10.1136/bmjqs-2016-005511

Perrigo Company plc. (2008, December 4). Untitled form 483 response. Retrieved from ORA FOIA Electronic Reading Room at http://www.fda.gov/ucm/groups/fdagov-public/@fdagov-afda-orgs/documents/document/ucm217881.pdf

Petit, F. (2015). Creating executive MBA program value through Deming's new economics principles. Business Education & Accreditation, 17(2), 59-66. Retrieved from http://www.theibfr.com/ARCHIVE/BEA-V7N2-2015.pdf#page=61

Poska, R. (2009). Global regulatory viewpoint: Human error and retraining--An interview with Kevin O'Donnell, Ph.D., Irish Medicines Board. Journal of CGMP Compliance, 13(4), 47. Retrieved from http://www.ivtnetwork.com/sites/default/files/Human.pdf

Puvanasvaran, A. P., Jamibollah, N., & Norazlin, N. (2014). Integration of poka yoke into process failure mode and effect analysis: A case study. American Journal of Applied Sciences, 11(8), 1332. Retrieved from http://dx.doi.org/10.3844/ajassp.2014.1332.1342

Puvathingal, B. J., & Hantula, D. A. (2012). Revisiting the psychology of intelligence analysis: From rational actors to adaptive thinkers. American Psychologist, 67(3), 199. Retrieved from http://dx.doi.org/10.1037/a0026640

Ramiah, S. P., & Banks, A. P. (2015). Naturalistic decision making through argumentation: Resolving labour disputes. Journal of Occupational and Organizational Psychology, 88(2), 364-381. Retrieved from http://dx.doi.org/10.1111/joop.12103

Reason, J. (1990). The contribution of latent human failures to the breakdown of complex systems. Philosophical Transactions of the Royal Society B: Biological Sciences, 327(1241), 475-484. Retrieved from http://www.jstor.org/stable/55319

Reason, J. (1995). Understanding adverse events: Human factors. Quality in Health Care, 4(2), 80-89. Retrieved from http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1055294/

Reason, J. (2000). Human error: Models and management. The BMJ, 320(7237), 768-770. Retrieved from http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1117770/pdf/768.pdf

Reason, J. (2003). Human error. Cambridge, England: Cambridge University Press. (Original work published 1990). ISBN 0-521-31419-4

Riggio, R. E., & Saggi, K. (2015). Incorporating "soft skills" into the collaborative problem-solving equation. Industrial and Organizational Psychology, 8(02), 281-284. Retrieved from https://doi.org/10.1017/iop.2015.34

Rim, S., Hansen, J., & Trope, Y. (2013). What happens why? Psychological distance and focusing on causes versus consequences of events. Journal of Personality and Social Psychology, 104(3), 457. Retrieved from https://doi.org/10.1037/a0031024

Rodchua, S. (2009). Comparative analysis of quality costs and organization sizes in the manufacturing environment. The Quality Management Journal, 16(2), 34. Retrieved from http://0-search.proquest.com.torofind.csudh.edu/docview/213592434

Ross, L. (1977). The intuitive psychologist and his shortcomings: Distortions in the attribution process. Advances in Experimental Social Psychology, 10, 173-220. Retrieved from http://web.mit.edu/curhan/www/docs/Articles/15341_Readings/Social_Cognition/Ross_Intuitive_Psychologist_in_Adv_Experiment_Soc_Psych_vol10_p173.pdf

Roth, W. F. (2013). Six Sigma: Just more of the same? Performance Improvement, 52(2), 25-30. Retrieved from https://doi.org/10.1002/pfi.21326

Rother, M. (2010). Toyota kata: Managing people for improvement, adaptiveness and superior results. New York, NY: McGraw-Hill Education. ISBN 0-07-163523-8

Schultz, J. (2013). Out in front. Quality Progress, 46(1), 18-23. Retrieved from http://0-search.proquest.com.torofind.csudh.edu/docview/1282501032

Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1), 99-118.

Singh, P. J., Dean, C. M., & Chee-Chuong, S. (2013). Deming management method: Subjecting theory to moderating and contextual effects. The Quality Management Journal, 20(3), 41. Retrieved from http://0-search.proquest.com.torofind.csudh.edu/docview/1419408758

Smith, A. (2009). An inquiry into the nature and causes of the wealth of nations. In C. Muir & D. Widger (Eds.). Project Gutenberg. (Original work published 1776). Retrieved from http://www.theindependentpatriot.com/Liberty%20Reading%20Group%20Documents/1776_An_Inquiry_into_the_Nature_and_Causes_of_the_Wealth_of_Nations_by_Adam_Smith.pdf

The W. Edwards Deming Institute. (2012). Deming the man-timeline. Retrieved from https://deming.org/theman/timeline

The W. Edwards Deming Institute. (2016). The fourteen points for management. Retrieved from https://www.deming.org/theman/theories/fourteenpoints

Trybou, J., De Caluwé, G., Verleye, K., Gemmel, P., & Annemans, L. (2015). The impact of professional and organizational identification on the relationship between hospital–physician exchange and customer-oriented behaviour of physicians. Human Resources for Health, 13(1). Retrieved from https://doi.org/10.1186/1478-4491-13-8

U.S. Food and Drug Administration. (1999, August). Guide to inspections of quality systems. Retrieved from http://www.fda.gov/ICECI/Inspections/InspectionGuides/ucm074918.htm

U.S. Food and Drug Administration. (2001). Guidance for industry Q7A: Good manufacturing practice guidance for active pharmaceutical ingredients. Retrieved from http://www.fda.gov/ICECI/ComplianceManuals/CompliancePolicyGuidanceManual/ucm200364.htm

U.S. Food and Drug Administration. (2005, May 12). [Untitled warning letter to J D Heiskell & Co.] Retrieved from https://web.archive.org/web/20170112203016/http://www.fda.gov/ICECI/EnforcementActions/WarningLetters/2005/ucm075405.htm

U.S. Food and Drug Administration. (2006). Guidance for industry investigating out-of-specification (OOS) test results for pharmaceutical production. Retrieved from http://www.fda.gov/downloads/drugs/guidancecomplianceregulatoryinformation/guidances/ucm070287.pdf

U.S. Food and Drug Administration. (2006). Guidance for industry Q9 quality risk management. Retrieved from http://www.fda.gov/downloads/drugs/guidancecomplianceregulatoryinformation/guidances/ucm073511.pdf

U.S. Food and Drug Administration. (2006). Guidance for industry quality systems approach to pharmaceutical CGMP regulations. Retrieved from http://www.fda.gov/downloads/drugs/guidancecomplianceregulatoryinformation/guidances/ucm070337.pdf

U.S. Food and Drug Administration. (2008, September 16). Untitled warning letter to Ranbaxy Laboratories, Ltd. Retrieved from https://web.archive.org/web/20170112063848/http://www.fda.gov/iceci/enforcementactions/warningletters/2008/ucm1048133.htm

U.S. Food and Drug Administration. (2009). Guidance for industry Q10 pharmaceutical quality system. Retrieved from http://www.fda.gov/downloads/drugs/guidancecomplianceregulatoryinformation/guidances/ucm073517.pdf

U.S. Food and Drug Administration. (2011). Guidance for industry process validation: General principles and practices. Retrieved from http://www.fda.gov/downloads/drugs/.../guidances/ucm070336.pdf

U.S. Food and Drug Administration. (2011, May 20). Untitled warning letter to Aurobindo Pharma Limited. Retrieved from https://web.archive.org/web/20170115051852/http://www.fda.gov:80/iceci/enforcementactions/warningletters/2011/ucm256861.htm

U.S. Food and Drug Administration. (2011, September 1). Untitled warning letter to Lonza Biologics, Inc. Retrieved from https://web.archive.org/web/20170112193228/http://www.fda.gov/iceci/enforcementactions/warningletters/2011/ucm271494.htm

U.S. Food and Drug Administration. (2015). Request for quality metrics guidance for industry (Draft). Retrieved from https://web.archive.org/web/20150905211729/https://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM455957.pdf

U.S. Food and Drug Administration. (2016). Submission of quality metrics data guidance for industry (Draft). Retrieved from http://www.fda.gov/downloads/drugs/guidancecomplianceregulatoryinformation/guidances/ucm455957.pdf

Vinnem, J. E., Hestad, J. A., Kvaløy, J. T., & Skogdalen, J. E. (2010). Analysis of root causes of major hazard precursors (hydrocarbon leaks) in the Norwegian offshore petroleum industry. Reliability Engineering & System Safety, 95(11), 1142-1153. Retrieved from http://dx.doi.org/10.1016/j.ress.2010.06.020

Weber, C. (2013). Conversational capacity: The secret to building successful teams that perform when the pressure is on. New York, NY: McGraw Hill Professional. ISBN 978-0-07180712-8

Wiengarten, F., Gimenez, C., Fynes, B., & Ferdows, K. (2015). Exploring the importance of cultural collectivism on the efficacy of lean practices: Taking an organizational and national perspective. International Journal of Operations & Production Management, 35(3), 370-391. Retrieved from http://dx.doi.org/10.1108/IJOPM-09-2012-0357

Wightkin, T. (2015). Determining the best practice for providing orientation to traveling nurses in an inpatient setting (Master's thesis). Available from ProQuest Dissertations and Theses database. (UMI 1526585)

World Health Organization. (2008). Annex 4: Good manufacturing practices for pharmaceutical products: Main principles. WHO Technical Report Series, No. 908. Retrieved from http://apps.who.int/prequal/info_general/documents/TRS908/WHO_TRS_908-Annex4.pdf

Yuniarto, H. (2012). The shortcomings of existing root cause analysis tools. Proceedings of the World Congress on Engineering, 3. Retrieved from http://www.iaeing.org/publications/WCE2012/WCE2012_pp1549-1552.pdf