
Volume 20, Number 2 - February 2004 to April 2004

A Statistical Comparison of Three Root Cause Analysis Tools
By Dr. Anthony Mark Doggett

Peer-Refereed Article

KEYWORD SEARCH

Leadership, Management, Quality, Research, Sociology

The Official Electronic Publication of the National Association of Industrial Technology • www.nait.org © 2004


Dr. Mark Doggett is a post-doctoral fellow and instructor at Colorado State University and is an adjunct faculty member at Aims Community College. He is currently working on grants for the National Foundation in medical technology and the Department of Education in career and technical education. He also teaches courses in process control, leadership, and project management. His research interests are decision-making and problem-solving strategies, technical management, theory of constraints, and operations system design.

To solve a problem, one must first recognize and understand what is causing the problem. According to Wilson et al. (1993), a root cause is the most basic reason for an undesirable condition or problem. If the real cause of the problem is not identified, then one is merely addressing the symptoms and the problem will continue to exist. For this reason, identifying and eliminating root causes of problems is of utmost importance (Dew, 1991; Sproull, 2001). Root cause analysis is the process of identifying causal factors using a structured approach with techniques designed to provide a focus for identifying and resolving problems. Tools that assist groups and individuals in identifying the root causes of problems are known as root cause analysis tools.

Purpose

Three root cause analysis tools have emerged from the literature as generic standards for identifying root causes. They are the cause-and-effect diagram (CED), the interrelationship diagram (ID), and the current reality tree (CRT). There is no shortage of information available about these tools. The literature provided detailed descriptions, recommendations, and instructions for their construction and use; documented processes and structured variations for each tool; and furnished colorful and illustrative examples of each so practitioners can quickly learn and apply them.

In summary, the literature confirmed that these three tools do, in fact, have the capacity to find root causes with varying degrees of accuracy, efficiency, and quality (Anderson & Fagerhaug, 2000; Arcaro, 1997; Brown, 1994; Brassard, 1996; Brassard & Ritter, 1994; Cox & Spencer, 1998; Dettmer, 1997; Lepore & Cohen, 1999; Moran et al., 1990; Robson, 1993; Scheinkopf, 1999; Smith, 2000). For example, Ishikawa (1982) advocated the CED as a tool for breaking down potential causes into more detailed categories so they can be organized and related into factors that help identify the root cause. In contrast, Mizuno (1979/1988) supported the ID as a tool to quantify the relationships between factors and thereby classify potential causal issues or drivers. Finally, Goldratt (1994) championed the CRT as a tool to find logical interdependent chains of relationships between undesirable effects leading to the identification of the core cause.

A fundamental problem for these tools is that individuals and organizations have little information with which to compare them to each other. The perception is that one tool is as good as another. While the literature was quite complete on each tool as a stand-alone application and on its relationship with other methods, it was deficient on how these three tools directly compare to each other. In fact, only two studies have compared them, and both comparisons were qualitative. Fredendall et al. (2002) compared the CED and the CRT using previously published examples of their separate effectiveness, while Pasquarella et al. (1997) compared all three tools using a one-group post-test design with qualitative responses. There is little published research that quantitatively measures and compares the CED, ID, and CRT. This study attempted to address those deficiencies.

The purpose of this study was to compare the perceived differences between the independent variables, the cause-and-effect diagram (CED), the interrelationship diagram (ID), and the current reality tree (CRT), with regard to causality, factor relationships, usability, and participation. The first dependent variable was the perceived ability of the tool to find root causes and the interdependencies between causes. The second dependent variable was the perceived ability of the tool to find relationships between factors or categories of factors. Factors may include causes, effects, or both. The third dependent variable was the overall perception of the tool's usability to produce outputs that were logical, productive, and readable. The fourth dependent variable was the perception of participation resulting in constructive discussion or dialogue. In addition, the secondary interests of the study were to determine the average process times required to construct each tool, the types of questions or statements raised by participants during and after the process, and the nature of the tool outputs.

Delimitations, Assumptions, and Limitations

The delimitations of the study were that the tools were limited to the CED, the ID, and the CRT, while participants were limited to small groups representing an authentic application of use. The limitations of the study were grounded in the statistical requirements for the analysis. The experimental results reflected the efficacy of the tools in the given context. While the researcher attempted to control obvious extraneous variables during the study, participant and organizational cultural attributes, politics, and social climate remained outside the scope and control of the analysis.

The assumptions of the study were that (a) root cause analysis techniques are useful in finding root causes, (b) the identification of a root cause will lead to a better solution than the identification of a symptom, and (c) the identification of causal interdependencies is important. In addition, expertise, aptitude, and prior knowledge about the tools, or lack thereof, were assumed to be randomly distributed within the participant groups and not to affect the overall perceptions or results. Also, the sample problem scenarios used in the study were considered as having equal complexity.

Methodology

The specific design was a within-subjects, single-factor repeated measures design with three levels. The independent variables were counterbalanced as shown in Table 1, where T represents the treatment, M represents the measure, and the group observations are indicated by O. The rationale for this design is that it compares the treatments to each other in a relative fashion using the judgments of the participants. In this type of comparative situation, each participant serves as his or her own control, making the use of independent groups unnecessary (Girden, 1992). The advantage of this design is that it required fewer participants while reducing the variability among them, which decreased the error term and the possibility of making a Type II error. The disadvantage was that it reduced the degrees of freedom (Anderson, 2001; Gliner & Morgan, 2000; Gravetter & Wallnau, 1992; Stevens, 1999).

Table 1. Repeated Measures Design Model
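To make the counterbalancing in Table 1 concrete, the following is a minimal sketch that generates a Latin-square style rotation of the three tools across three groups and three weekly sessions. The rotation shown is an assumption for illustration only; the authoritative assignment is the one given in Table 1.

    # A minimal sketch of a counterbalanced (Latin-square) rotation of the
    # three tools across three groups and three sessions. Illustrative only;
    # the study's actual assignment is the one shown in Table 1.
    TOOLS = ["CED", "ID", "CRT"]

    def latin_square_rotation(tools):
        """Shift the tool order by one position per group so that every
        tool appears exactly once in every session."""
        k = len(tools)
        return [[tools[(group + session) % k] for session in range(k)]
                for group in range(k)]

    for g, sessions in enumerate(latin_square_rotation(TOOLS), start=1):
        # Each session is a treatment (T) followed by a measure (M).
        print(f"Group {g}: " + " -> ".join(sessions))

Because each row starts one position later than the last, every tool appears exactly once per session across the three groups, which is the property the counterbalancing relies on.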
Measures and Instrument

Three facilitators were trained in the tools, processes, and procedures before the experiment. They were instructed to be available to answer questions from the participants about the tools, goals, purpose, methods, or instructions. The facilitators did not provide information about the problem scenarios. They were also trained in observational techniques and instructed to intervene in the treatment process if a group was having difficulty constructing the tool or managing its process. The activity of the facilitators was intended to help control the potential diffusion of treatment across the groups. To ensure consistency, each treatment packet was similarly formatted with the steps for tool construction and a graphical example based on published material. Each treatment group also received the same supplies for constructing the tools. The dependent variables were measured using a twelve-question self-report questionnaire with a five-point Likert scale and semantic differential phrases.

Procedure

Participants and facilitators were randomly assigned to one of three groups. The researcher provided simultaneous instructions about the experiment, problem scenarios, and materials. Five minutes were allowed for the participants to review their respective scenarios and instructions, followed by a ten-minute question period. The participants were then asked to analyze and find the perceived root cause of the problem. The facilitators were available for help throughout the treatment. Upon completion of the treatment, the participants completed the self-report instrument. This process was repeated until all groups had applied all three analysis tools to three randomized problems. Each subsequent treatment was administered every seven days.

Reliability and Validity

Content validity of the instrument was deemed adequate by a group of graduate students familiar with the research. Cronbach's alpha was .82 for a pilot study of 42 participants. The dependent variables were also congruent with an exploratory principal component analysis.
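For readers who want to reproduce a reliability figure of this kind on their own instrument data, the sketch below implements the standard Cronbach's alpha formula. The response matrix is randomly generated filler sized like the pilot study (42 respondents, twelve items); only the computation mirrors the analysis, not the data.

    # Cronbach's alpha for an n-respondents x k-items matrix.
    # The responses below are fabricated placeholders, not the study's data.
    import numpy as np

    def cronbach_alpha(items):
        """items: 2-D array, rows = respondents, columns = questionnaire items."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)       # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    rng = np.random.default_rng(0)
    trait = rng.integers(1, 6, size=(42, 1))                    # shared signal
    responses = np.clip(trait + rng.integers(-1, 2, (42, 12)), 1, 5)
    print(f"alpha = {cronbach_alpha(responses):.2f}")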


Threats to internal validity were maturation and carryover effects. Other threats included potential participant bias, statistical regression to the mean, outside influence, interruptions, and interaction between participants. An additional threat was attrition of participants due to fatigue, boredom, or time constraints.

The most significant threat to external validity was the representativeness of the selected sample because of ecological components associated with qualitative research. The setting and context were congruent with a typical educational environment. Therefore, the assumption of generalizability would be based on a judgment about the similarity between the empirical design and an ideal situation in the field (Anderson, 2001). From a pure experimental standpoint, the external validity might be considered low, but from a representative design standpoint, the validity was high (Snow, 1974).

Participants

The participants were first- and second-year undergraduate students in three intact classroom sections of a general education course on team problem solving and leadership. Each group consisted of ten to thirteen participants and was primarily white males. Females accounted for 11% of the sample and minorities comprised less than 3%. The initial sample was 107 participants. The actual sample was 72 participants due to attrition and non-responses on the instrument.

Data Analysis

A repeated-measures ANOVA with a two-tailed .05 alpha level was selected for the study. For significant findings, post hoc t tests with Bonferroni adjustments identified specific tool differences based on conditions of sphericity (Field, 2000; Stevens, 1999), and effect sizes (d) were calculated (Cohen, 1988). As in other ANOVA designs, homogeneity of variance is important, but in repeated measures each score is somewhat correlated with the previous measurement. This correlation is the basis of the sphericity assumption. While repeated-measures ANOVA is robust to violations of normality, it is not robust to violations of sphericity. For violations of sphericity, the researcher used the Huynh and Feldt (1976) corrected estimates suggested by Girden (1992). All statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS™) software.
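The original analysis was run in SPSS. As an illustrative sketch only, the code below implements a one-way repeated-measures ANOVA by hand and applies the Huynh-Feldt correction to the degrees of freedom, one common way to proceed when sphericity is in doubt. The scores are fabricated placeholders shaped like the study (72 participants by three tools), not the actual data.

    # One-way repeated-measures ANOVA with a Huynh-Feldt (1976) correction.
    # An independent re-implementation on placeholder data, not SPSS output.
    import numpy as np
    from scipy import stats

    def rm_anova_hf(X):
        """X: n subjects (rows) by k conditions (columns)."""
        n, k = X.shape
        grand = X.mean()
        ss_cond = n * ((X.mean(axis=0) - grand) ** 2).sum()    # between conditions
        ss_subj = k * ((X.mean(axis=1) - grand) ** 2).sum()    # between subjects
        ss_err = ((X - grand) ** 2).sum() - ss_cond - ss_subj  # residual
        df1, df2 = k - 1, (n - 1) * (k - 1)
        F = (ss_cond / df1) / (ss_err / df2)

        # Greenhouse-Geisser epsilon from the double-centered covariance
        # matrix, then the Huynh-Feldt adjustment, capped at 1.0.
        S = np.cov(X, rowvar=False)
        C = np.eye(k) - np.ones((k, k)) / k
        Sc = C @ S @ C
        eps_gg = np.trace(Sc) ** 2 / ((k - 1) * np.sum(Sc * Sc))
        eps_hf = min(1.0, (n * df1 * eps_gg - 2) / (df1 * (n - 1 - df1 * eps_gg)))

        p = stats.f.sf(F, df1 * eps_hf, df2 * eps_hf)          # corrected p-value
        return F, df1 * eps_hf, df2 * eps_hf, p

    rng = np.random.default_rng(1)
    scores = rng.integers(1, 6, size=(72, 3)).astype(float)    # 72 people x 3 tools
    F, df1, df2, p = rm_anova_hf(scores)
    print(f"F({df1:.3f}, {df2:.1f}) = {F:.3f}, p = {p:.3f}")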
Statistical Findings

Screening indicated the data were normally distributed and met all assumptions for parametric statistical analysis. Across all responses, Cronbach's alpha for the instrument was .93. Descriptive statistics for the dependent variables showed that the mean for the CED was either the same or higher on all dependent variables, with standard deviations less than one. For the individual questions on the instrument, the mean for the CED was higher on eight questions, while the mean for the ID was higher on four.

No statistical difference was found among the three tools for causality or participation. Therefore, the null hypothesis (H0) was retained, and there does not appear to be a difference in the perceptions of the participants concerning the ability of the tools to identify causality or affect participation.

No statistical difference was found among the three tools regarding factor relationships, so the null hypothesis (H0) for factor relationships was retained. However, as shown in the ANOVA source table in Table 2, there was a significant difference in response to an individual question on this variable regarding how well the tools identify categories of causes (F(2, 74) = 7.839, p = .001). Post hoc tests found that the CED was perceived as statistically better at identifying cause categories than either the CRT (t(85) = 4.54, p < .001) or the ID (t(83) = 2.81, p = .023), with medium effect sizes of 0.59 and 0.47, respectively.

Table 2. Repeated Measures Analysis of Variance (ANOVA) for Individual Question Regarding Cause Categories

Using a corrected estimate, a significant statistical difference was also found for usability (F(1.881, 74) = 9.156, p < .001). Post hoc comparisons showed that both the CED (t(85) = 5.04, p < .001) and the ID (t(81) = 2.37, p = .009) were perceived as more usable than the CRT, with medium effect sizes of 0.56 and 0.53, respectively. The overall results for usability are shown in Table 3. Therefore, the null hypothesis (H0) was rejected: there is a significant difference between the CED, ID, and CRT with regard to perceived usability.

Table 3. Significant Within-Group Differences for Usability Variable

The usability variable was the perception of the tool's ease or difficulty, productive output, readability, and assessment of integrity. This variable was measured using four questions, on three of which participants perceived a significant difference. The statistical results for the individual questions about usability are shown in Table 4.

Using a corrected estimate, participants responded significantly to the question regarding perceived ease or difficulty of use (F(1.886, 71) = 38.395, p < .001). The post hoc comparison revealed a large difference between the CRT and the CED (t(81) = 8.95, p < .001, ES = 1.15) as well as between the CRT and the ID (t(77) = 5.99, p < .001, ES = 1.18). Thus, the CRT was perceived as much more difficult than either the CED or the ID.

Participants responded significantly to the question regarding perceived differences between the tools for productive problem solving activity (F(2, 71) = 3.650, p = .028). Post hoc tests found that the CED was perceived to be statistically better at facilitating productive problem solving activity than the CRT (t(81) = 3.67, p = .010). The result was a small-to-medium effect size of 0.38.

Participants responded significantly to the question regarding the perceived readability of the tools (F(2, 71) = 3.480, p = .033). Post hoc tests indicated the CED was significantly preferred over the CRT for overall readability (t(81) = 3.41, p = .021), with a small-to-medium effect size of 0.39.

Table 4. Repeated Measures Analysis of Variance (ANOVA) for Individual Questions Regarding Usability
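The post hoc procedure reported above pairs each tool with every other tool. The following is a minimal sketch of that procedure, assuming paired t tests with a Bonferroni adjustment and a paired-samples Cohen's d; the study does not publish its exact effect-size formula, and the ratings below are simulated stand-ins rather than the study's data.

    # Bonferroni-adjusted paired t tests with Cohen's d for paired data.
    # Tool means and ratings are illustrative placeholders only.
    from itertools import combinations
    import numpy as np
    from scipy import stats

    def posthoc_bonferroni(scores):
        """scores: dict mapping tool name -> array of one rating per person."""
        pairs = list(combinations(scores, 2))
        for a, b in pairs:
            diff = scores[a] - scores[b]
            t, p = stats.ttest_rel(scores[a], scores[b])
            p_adj = min(1.0, p * len(pairs))        # Bonferroni adjustment
            d = diff.mean() / diff.std(ddof=1)      # Cohen's d for paired data
            print(f"{a} vs {b}: t({len(diff) - 1}) = {t:.2f}, "
                  f"adjusted p = {p_adj:.3f}, d = {d:.2f}")

    rng = np.random.default_rng(2)
    tools = {name: rng.normal(mu, 1.0, size=72)     # one mean rating per person
             for name, mu in [("CED", 3.8), ("ID", 3.6), ("CRT", 3.1)]}
    posthoc_bonferroni(tools)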
Other Findings

Process Times and Counts

The mean times for the CED, ID, and CRT were 26 minutes, 26 minutes, and 30 minutes, respectively. The CED had the smallest standard deviation at 5.59 and the CRT had the largest at 9.47; the standard deviation for the ID was 8.94. The researcher also counted the number of factors and arrows on each tool output. On average, both the CED and the CRT used fewer factors and arrows than the ID, and the CRT used a third fewer arrows than either the CED or the ID.

Open-Ended Participant Responses

Comments received were generally about the groups, the process, or the tools. Comments about the groups were typically about group size or the amount of participation. Comments about the process were either positive or negative depending on the participant's experience. Comments about the CED and ID tools were either favorable or ambiguous. Most of the comments about the CRT concerned its degree of difficulty.

Researcher Observations

The researcher sorted participant questions and facilitator observations using key words in context to discover emergent themes or categories. This sorting resulted in four categorical types: process, construction, root cause, and group dynamics. Questions raised during the development of the CED were about the cause categories and whether multiple root causes were acceptable. Questions about the ID were primarily about the direction of the arrows. Questions about the CRT were varied, but generally about the tool process or construction. A common question from participants across all three tools was whether their assignment was to find a root cause or to determine a solution to the problem. Most facilitator observations were about group dynamics or the overall process. Typical facilitator comments concerned the degree of participation or leadership roles. Facilitator observations were relatively consistent regardless of the tool used, except for the CRT: construction observations were much more frequent, and there were no observations concerning difficulty with root causes.

Summary of Statistical Findings

The statistical significance of usability was primarily due to significant differences on three of the four individual questions. The large effect size between the CRT and the other tools in response to ease or difficulty of use was the dominant factor. Thus, the CRT was deemed more difficult than the other tools. The other significant statistical finding was that the CED was perceived as better at identifying cause categories than either the ID or the CRT. However, this finding did not change participants' perceptions of factor relationships. Post hoc comparisons for all significant findings are shown in Table 5.


Table 5. Post Hoc T-Tests with Bonferroni Adjustment

The tool outputs were also qualitatively evaluated for technical accuracy, whether a root cause was identified, and the integrity of the root cause. The technical accuracy of the tool was evaluated based on the (a) directionality of the cause and effect relationships, (b) specificity of the factors, (c) format, and (d) use of conventions. Root cause was evaluated by whether the group, through some means, visual or verbal, identified a root cause during the treatment. Groups that were unable to identify a root cause either disagreed about the root cause or indicated that a root cause could not be found. The integrity of the root cause was evaluated based upon specificity and reasonableness. Specificity was defined as something that could be specifically acted upon, whereas reasonableness was defined such that someone not familiar with the problem would state that the root cause seemed rational or within the bounds of common sense.

Summary of Other Findings

A summary of the questions, observations, and tool output evaluations is shown in Table 6. The technical accuracy of both the CED and the ID was high, whereas the technical accuracy of the CRT was mixed. Groups using the CED were seldom able to identify a specific root cause, while the groups using the ID did better. Because groups using the CED could not identify a specific root cause, the integrity of their selections also suffered. Only one CED group found a root cause that was both specific and reasonable. While the ID groups found a root cause most of the time, the integrity of their root causes was mixed. In contrast, CRT groups found a root cause most of the time, with high integrity in over half the cases. In addition, the CRT groups were able to do this using fewer factors and arrows than the other tools.

The number of root cause questions and observations for the CED was twice as high as for the ID and ten times greater than for the CRT. Conversely, the CRT generated more process and construction questions and observations than the ID or the CED. In summary, the CED produced the greatest amount of questions and discussion about root causes, whereas the CRT produced the greatest amount of questions and discussion about process and construction.

Table 6. Summary of Questions, Observations, and Tool Outputs

Discussion

The CED was specifically perceived by participants as better than the CRT at identifying cause categories, facilitating productive problem solving activity, being easier to use, and being more readable. The ID was easier to use, but was no different from any of the other tools in other aspects, except cause categories. Concurrently, none of the tools was perceived as being significantly better for causality or participation. Rather, the study suggested that the ID is another easy alternative for root cause analysis. The data also support that the ID leads groups to specific root causes in about the same amount of time as the CED.

The results neither supported nor refuted the value of the CRT other than to verify that it is, in fact, a more complex method. Considering the effect sizes, participants defined usability as ease of use. Thus, the CRT was perceived by participants as too complex. However, usability was not related to finding root causes.

Authors and scholars can reasonably argue that a basic requirement for a root cause analysis is the identification of root causes. Until this can be statistically verified, one cannot definitively claim the superiority of one tool over another.

It is postulated that the difference in average process time was due to the construction complexity of the CRT. Also, the variability of the ID improved over time while the variability of the CRT deteriorated. If any conclusion can be reached from this, it is perhaps that as groups gain experience, their proficiency with the ID increases. Conversely, experience may be detrimental to the CRT because it uses a different logic system that starts with effects and moves to causes, whereas the ID starts with potential causes.

Specific observations indicated that more extensive facilitator intervention was required in the CRT process than with the other tools. Facilitators had to intervene more often because the groups had difficulty building the CRT without making construction errors or process mistakes. Most interestingly, in spite of these difficulties, groups using the CRT were able to identify specific and reasonable root causes over half the time as compared to the CED and the ID. This was one of the distinguishing findings of the study.

With the ID, groups were able to identify root causes, but only because ID construction requires a count of incoming and outgoing arrows. By simply looking at the factor with the most outgoing arrows, groups declared that a root cause had been found and stopped any further analysis. Their certainty undermined critical analysis. Khaimovich (1999) encountered similar results in a behavioral study of problem solving teams in which participants were reluctant to scrutinize the validity of their models and remained content with their original ideas.

So why were participants unable to recognize the difference between the three tools with respect to the integrity of their root causes? First, the participants were asked only to identify a root cause, not a root cause that was reasonable and specific enough to be acted upon. Second, most of the groups avoided creating the tension that might have produced better results. To quote Senge (1990), "Great teams are not characterized by an absence of conflict" (p. 249). Although the majority of the CRT groups were uncomfortable during the process, the quality of their outputs was better. Third, groups were (a) learning the tools for the first time, (b) emotionally involved in the process, and (c) engaging in what Scholtes (1988) called the "rush to accomplishment." Because many participants were learning and doing at the same time, they did not have time to assess or reflect on the meaning of their outputs. Their reflection was impaired by the emotionally laden group processes. In addition, participants were instructed to work on the problem until they had identified a root cause, which, in some groups, manifested itself as the need to achieve that goal as quickly as possible.

Implications for Policy and Practice

The type and amount of training needed for each tool varies. The CED and ID can be used with little formal training, but the CRT requires comprehensive instruction because of its logic system and complexity. However, the CED and ID both appear to need some type of supplemental instruction in critical thinking and decision-making methods. The CRT incorporates its own logic evaluation system, but the CED and ID have no such mechanism and are highly dependent on the thoroughness of the group using them.

Serious practitioners should consider using facilitators during root cause analysis. Observations found that most groups had individuals who dominated or led the process. When leadership took place, individual contributions were carefully considered with a mix of discussion and inquiry. Group leaders did not attempt to convince others of their superior expertise, and conflicts were not considered threatening. In contrast, groups that were dominated did not encourage discussion, and differences of opinion were viewed as disruptive. An experienced facilitator could encourage group members to raise difficult and potentially conflicting viewpoints so the best ideas would emerge.

These tools can be used to their greatest potential with repeated practice. Like other developed skills, the more groups use the tools, the better they become with them. For many participants in this study, this was the first time they had used a root cause analysis tool. Indeed, for some, it was their first experience with any structured group decision-making method. Their experience and participation created insights that could be transferred to other analysis activities.

Conclusion

The intent of this research was to identify the best tool for root cause analysis. This study was not able to identify that tool. However, knowledge was gained about the characteristics that make certain tools better under certain conditions. Using these tools, finding root causes seems to be related to how effectively groups can work together to test assumptions. The challenge of this study was how to capture and test what people say about the tools versus how the tools actually perform. Perhaps the answer lies in the interaction between the participants and the tool. Root cause analysis may be valuable because it has the potential of developing new ways of thinking.

Example of a Cause-and-Effect Diagram
[Figure: potential causes of high absenteeism grouped into Methods, People, Environment, and Machinery branches, with factors such as a poor reward system, inadequate training, little positive feedback, low morale, too much overtime, internal competition, poor maintenance, and preventive maintenance not done on schedule.]

Example of an Interrelationship Diagram
[Figure: factors surrounding high employee turnover, including complex data entry, human resources policies and procedures, labels falling off packages, shipping policies and procedures, and the newest employees going to shipping; each factor is annotated with its count of incoming (IN) and outgoing (OUT) arrows.]

Example of a Current Reality Tree
[Figure: a logic tree tracing why operators do not use SOPs, including operators viewing SOPs as a tool for inexperienced and incompetent operators, the company not enforcing the use of SOPs, SOPs that are incorrect or not updated regularly, some operations lacking SOPs, no defined system for creating and updating SOPs, and standardization of processes not being a company value.]

References

Anderson, B., & Fagerhaug, T. (2000). Root cause analysis: Simplified tools and techniques. Milwaukee: ASQ Quality Press.
Anderson, N. H. (2001). Empirical direction in design and analysis. Mahwah, NJ: Erlbaum.
Arcaro, J. S. (1997). TQM facilitator's guide. Boca Raton, FL: St. Lucie Press.
Brassard, M., & Ritter, D. (1994). The memory jogger II: A pocket guide of tools for continuous improvement and effective planning. Salem, NH: GOAL/QPC.
Brassard, M. (1996). The memory jogger plus+: Featuring the seven management and planning tools. Salem, NH: GOAL/QPC.
Brown, J. I. (1994). Root-cause analysis in commercial aircraft final assembly (Master's project, California State University, Dominguez Hills). Master's Abstracts International, 33(6): 1901.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Cox, J. F., III, & Spencer, M. S. (1998). The constraints management handbook. Boca Raton, FL: St. Lucie Press.
Dettmer, H. W. (1997). Goldratt's theory of constraints. Milwaukee: ASQC Press.
Dew, J. R. (1991). In search of the root cause. Quality Progress, 24(3): 97-107.
Field, A. (2000). Discovering statistics using SPSS for Windows. London: Sage.
Fredendall, L. D., Patterson, J. W., Lenhartz, C., & Mitchell, B. C. (2002). What should be changed? Quality Progress, 35(1): 50-59.
Girden, E. R. (1992). ANOVA: Repeated measures. Newbury Park, CA: Sage.


Gliner, J. A., & Morgan, G. A. (2000). Research methods in applied settings: An integrated approach to design and analysis. Mahwah, NJ: Erlbaum.
Goldratt, E. M. (1994). It's not luck. Great Barrington, MA: North River Press.
Gravetter, F. J., & Wallnau, L. B. (1992). Statistics for the behavioral sciences: A first course for students of psychology and education (3rd ed.). St. Paul, MN: West Publishing Co.
Huynh, H., & Feldt, L. (1976). Estimation of the Box correction for degrees of freedom from sample data in the randomized block and split-plot designs. Journal of Educational Statistics, 1(1): 69-82.
Ishikawa, K. (1982). Guide to quality control (2nd rev. ed.). Tokyo: Asian Productivity Organization.
Khaimovich, L. (1999). Toward a truly dynamic theory of problem-solving group effectiveness: Cognitive and emotional processes during the root cause analysis performed by a re-engineering team (Dissertation, University of Pittsburgh). Dissertation Abstracts International, 60(04B): 1915.
Lepore, D., & Cohen, O. (1999). Deming and Goldratt: The theory of constraints and the system of profound knowledge. Great Barrington, MA: North River Press.
Mizuno, S. (Ed.). (1988). Management for quality improvement: The seven new QC tools. Cambridge: Productivity Press. (Original work published 1979.)
Moran, J. W., Talbot, R. P., & Benson, R. M. (1990). A guide to graphical problem-solving processes. Milwaukee: ASQC Quality Press.
Pasquarella, M., Mitchell, B., & Suerken, K. (1997). A comparison on thinking processes and total quality management tools. In 1997 APICS constraints management proceedings: Make common sense a common practice (pp. 59-65). Denver, CO, April 17-18, 1997. Falls Church, VA: APICS.
Robson, M. (1993). Problem solving in groups (2nd ed.). Brookfield, VT: Gower.
Scheinkopf, L. J. (1999). Thinking for a change: Putting the TOC thinking processes to use. Boca Raton, FL: St. Lucie Press.
Scholtes, P. (1988). The team handbook: How to use teams to improve quality. Madison, WI: Joiner.
Senge, P. (1990). The fifth discipline. New York: Doubleday.
Smith, D. (2000). The measurement nightmare: How the theory of constraints can resolve conflicting strategies, policies, and measures. Boca Raton, FL: St. Lucie Press.
Snow, R. E. (1974). Designs for research on teaching. Review of Educational Research, 44(3): 265-291.
Sproull, B. (2001). Process problem solving: A guide for maintenance and operations teams. Portland: Productivity Press.
Stevens, J. P. (1999). Intermediate statistics: A modern approach. Mahwah, NJ: Erlbaum.
Wilson, P. F., Dell, L. D., & Anderson, G. F. (1993). Root cause analysis: A tool for total quality management. Milwaukee: ASQC Quality Press.
