Appendix E: Quality Improvement Training Initiatives with Suppliers


APPENDIX E: QUALITY IMPROVEMENT TRAINING INITIATIVES WITH SUPPLIERS

This appendix is intended for students who have not been exposed to various aspects of quality management; it also serves as a review for those who have. Several of the cases included with this text require calculations presented in this appendix.

Quality improvement training, such as training provided during supplier development efforts, includes a variety of activities that support better supplier quality. Purchasing managers must become familiar with the important components of quality management. The four areas of quality training that purchasing companies often emphasize with suppliers include

I. Total quality management improvement training

II. Statistical quality control techniques training

III. Design of experiments training

IV. Problem-solving training

I. TOTAL QUALITY MANAGEMENT TRAINING

Total quality management (TQM) consists of “organized continuous improvement activities involving everyone in an organization in a totally integrated effort toward improving all products, processes and services, with the ultimate goal of error-free work and 100% customer satisfaction.”i Larger suppliers usually have the resources and time to commit to TQM training, but many smaller companies do not. As a result, some buying companies, such as Caterpillar, actively promote TQM and quality-related training with suppliers. The purpose of TQM training is to change an organizational culture from one that accepts less than perfect quality to one that recognizes the importance of satisfying customer requirements 100% of the time.

In tracking a supplier’s continuous improvement efforts, research has shown that organizations typically proceed through four stages as they establish TQM initiatives over a ten-year period.ii In the early stages of TQM program development, suppliers begin in the Awareness stage. This stage is characterized by an acknowledgment of the importance of quality, yet employees are often confused about, and lack commitment to, process change. Process Mapping and Process Ownership are the successive stages of development. Once the business mission is defined, the issues and processes that have the greatest impact on achieving the mission are identified. The process owner defines the boundaries and forms the improvement team, which uses process mapping to develop a common understanding of the process.

Process mapping involves the identification of the sequence of activities that occur when a customer order is processed. The final stage, Quality Culture, represents the furthest point along the spectrum, in which the principles of TQM are fully integrated into daily decisions throughout every functional activity in the product and process value chain.

To further our understanding of TQM, we will discuss a popular approach to TQM education that many U.S. companies follow—the Crosby approach to quality improvement through defect prevention.iii

This is certainly not the only approach to TQM. Other approaches include those developed by Joseph Juran and W.E. Deming. The Crosby approach to total quality management and continuous improvement involves the four absolutes of quality, the measurement of nonconformance, and process proving and control.

FOUR ABSOLUTES OF QUALITY Central to Crosby’s vision of TQM are the four absolutes of quality management: 1) Conformance to customer requirements, 2) Error prevention, 3) Zero defects, and 4) Nonconformance cost. These absolutes form the basis for creating a mindset that guides quality improvement efforts and processes. Exhibit 9.11 compares Crosby’s four absolutes of quality management to a traditional perception of quality.

Defining quality as conformance to customer requirements makes quality very specific. Quality is no longer based on a sense of “goodness” that may have no real relationship to what a customer actually requires. The only system that results in quality is prevention of errors. Appraisal requires locating errors (usually through inspection), evaluating the errors, and disposing of and correcting errors after a procedure is complete. Appraisal is usually an expensive and wasteful approach to quality management. Prevention, on the other hand, requires the design of systems to prevent errors from occurring in the first place.

Zero defects means doing it right the first time. Total quality management does not accept the traditional view of acceptable quality levels or “that’s close enough.” In the past, if companies had 98% defect-free products, it was believed that quality was good enough. However, many Japanese and American companies have now improved quality to such an extent using a “six sigma” system that the number of defects produced is as little as 3.4 parts per million! Finally, quantifying the cost of poor quality in terms of the cost of nonconformance allows quality to become a clearly understood measure within an organization.

It also supports identifying the elements (and their cost) of not doing a job right the first time.
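The defect rates quoted above can be checked with a short calculation. The sketch below (the function name is ours) uses Python’s `statistics.NormalDist` to convert a sigma level into defects per million; the familiar six-sigma figure of roughly 3.4 ppm assumes the conventional 1.5-sigma shift of the process mean.

```python
# Sketch: defect rates implied by process sigma levels, assuming a normally
# distributed characteristic centered between the specification limits.
from statistics import NormalDist

nd = NormalDist()  # standard normal distribution

def defects_ppm(sigma_level: float, mean_shift: float = 0.0) -> float:
    """Parts per million outside +/- sigma_level, with the mean shifted by mean_shift sigmas."""
    upper = nd.cdf(-(sigma_level - mean_shift))   # tail beyond the nearer limit
    lower = nd.cdf(-(sigma_level + mean_shift))   # tail beyond the farther limit
    return (upper + lower) * 1_000_000

print(round(defects_ppm(3.0)))          # ~2700 ppm for a centered 3-sigma process
print(round(defects_ppm(6.0, 1.5), 1))  # ~3.4 ppm: the classic six-sigma figure
```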

MEASUREMENT OF NONCONFORMANCE The measurement of quality nonconformance is central to the quality improvement process, and was originally pioneered by Joseph Juran, perhaps the greatest quality expert of the twentieth century. The chairman of a major corporation maintains that his company is trying to make quality as important in the measurement of general management as market share, profit, and cash management. He says there is so much that is abstract about quality that unless you can put it in profit and loss terms, it is not as motivating.

Calculating the price of nonconformance provides visibility to current and potential quality-related problems, and permits objective evaluation and corrective action where required. Measuring the cost of poor quality (i.e., the price of nonconformance) in terms of dollars allows a firm to

• Identify the total cost of quality associated with a specific nonconformance

• Track quality improvement in real terms

• Identify the many adverse effects of poor quality on internal and external customers

• Draw management’s attention to the need for quality improvement

• Identify areas where corrective action will be most beneficial

The ability to break down and cost specific elements of nonconforming quality is critical to the quality improvement process.

PROCESS PROVING AND CONTROL Process proving includes the procedures used to demonstrate the capability of a process to produce a family of parts before the process is fully operational. A production process is an example of a system that requires proving before actual use. Other examples include order entry systems, accounts payable systems, or inventory control systems. Process-proving techniques include pilot tests, trial runs, or experiments designed to evaluate process capability. Regardless of the process-proving technique used, all techniques have several common features. Process proving involves using:

• The same equipment and facilities in a trial run before the actual operation

• The same skills and knowledge used in performing the process on a day-to-day basis

• The same procedures used on a day-to-day basis

• The same suppliers used on a day-to-day basis

• An objective (i.e., quantified) set of tests to determine if the output from a process conforms to established statistical parameters

• Statistical analysis of data from the output to assess the overall capability of the process. This normally involves calculating a process capability index or ratio, which is discussed in the next section.

Process control, on the other hand, involves maintaining and measuring a proven process. Process control measurements verify if a process is producing within previously established boundaries. The four requirements for using process control techniques include (1) a timely method of measuring output, (2) a standard to compare the measure, (3) a method of regulation and correction if the measurement detects a nonconformance, and (4) a process that has already been proven to be capable.
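The four process-control requirements above can be illustrated with a minimal sketch; the function name, control limits, and readings below are hypothetical, not from the text.

```python
# Sketch of the four process-control requirements, assuming requirement (4):
# the process was already proven capable before control began.
def check_measurement(value: float, lower: float, upper: float) -> str:
    """(1) a timely measurement, (2) compared against a preestablished standard."""
    if lower <= value <= upper:
        return "in control"
    return "out of control"  # (3) triggers regulation and correction

readings = [9.98, 10.01, 10.06, 9.99]  # illustrative measurements
for r in readings:
    print(r, check_measurement(r, lower=9.95, upper=10.05))
```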

II. STATISTICAL QUALITY CONTROL TECHNIQUES

Another quality training area involves the use of statistical quality techniques. These tools provide evidence of whether a given process is capable or is performing within an acceptable range. Statistical techniques cannot specifically identify the source of a problem; that is done through problem-solving processes such as root cause analysis, brainstorming, and process flow studies. Once the proposed corrective action is carried out, statistical techniques can also indicate whether the corrective action successfully eliminated the root cause of the problem. Two common statistical quality techniques include

• Process Capability Studies (Cpk)

• Statistical Process Control Charts (SPC)

PROCESS CAPABILITY iv

Before using statistical process control charts, we must verify that a process is capable of producing according to requirements or specifications. Process capability measures consider the spread or variation within a process and relate it to a desired design specification. Purchasers should never accept products from suppliers who do not know the capability of their processes.

Several terms will help us to further understand how a process can be improved. First, the target value of a product’s design is the ideal dimension for the critical characteristic, surrounded by a tolerance that product engineering determines is acceptable. In most cases, the specification is given as an upper and a lower specification limit. The second relevant term is the standard deviation of the supplier’s process. This measure provides an indication of the dispersion of the process, which will always vary. For example, we can never realistically expect a machine operated by any person to produce a part with a perfectly constant set of measures. There will always be some deviation or dispersion in the output of a process. In general, deviations from a mean will occur because of the Four M’s:

1. Man—Variations attributable to lack of training, human error, or unfamiliarity with a process

2. Machine—Variations occurring due to worn parts or improper maintenance

3. Methods—Variations occurring due to different procedures or setups

4. Materials—Variations occurring due to natural deviations in material inputs (supplier quality greatly affects this)

The key to determining process capability is the Cp measure or ratio. This is defined as

Cp = |USL – LSL| / 6σ

The process width (the denominator of the ratio) is identified as 6 times the standard deviation (σ) of the process, which is calculated using output directly from the process. In the case of unilateral (i.e., one-sided) specifications with a process average of μ, the Cp equation is

Cp = |USL – μ| / 3σ or

Cp = |μ – LSL| / 3σ

The upper specification limit (USL) and lower specification limit (LSL) refer to the specifications provided by engineering on a particular critical characteristic. The Cp ratio should be at least 2.0.

A value of 2.0 indicates that the specification width is twice that of the distribution of the output from a process. However, the Cp measure often does not tell the full story. Non-centering increases the probability that a process will produce nonconforming parts. What is non-centering? If a design engineer specifies that a part should be produced at a design target of 10 inches, and parts taken from the process measure an average of 10.02 inches, we say that the process is off-centered or non-centered by .02 inches on average.
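The Cp ratio translates directly into code. This is a sketch with our own function names and illustrative numbers, covering both the two-sided and the unilateral (one-sided) forms.

```python
# Sketch of the Cp ratio: specification width divided by the process width (6 sigma).
def cp_two_sided(usl: float, lsl: float, sigma: float) -> float:
    return abs(usl - lsl) / (6 * sigma)

def cp_one_sided(limit: float, mean: float, sigma: float) -> float:
    # unilateral specification: distance from the process average to the single limit
    return abs(limit - mean) / (3 * sigma)

# Illustrative values only: spec 10.00 +/- .05 inches, process sigma of .02
print(round(cp_two_sided(10.05, 9.95, 0.02), 2))   # 0.83 -- not a capable process
print(round(cp_one_sided(10.05, 10.00, 0.02), 2))  # 0.83 -- same spread, one-sided view
```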

A second process capability calculation, Cpk, takes into account process variability and non-centering.

If a process produces exactly to the design center target, then Cp = Cpk. Any off-centering that occurs, however, results in Cpk being less than Cp. Cpk can never be larger than Cp. Cpk is an ideal process capability

measure because it takes into account both spread (i.e., process width) and process non-centering. Cpk is calculated as

Cpk = Cp(1 – k), where

k = |D – μ| / ((USL – LSL)/2)

D is the design center target, μ is the process average, and USL and LSL are the upper and lower specification limits, respectively. The value of Cpk should be 1.50 at a minimum. Note that when the design center and process mean are the same, Cp and Cpk are equal.
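The Cpk calculation can be sketched the same way; the function names and numbers below are ours, chosen only to show that a centered process gives Cpk = Cp while off-centering lowers it.

```python
# Sketch of Cpk = Cp * (1 - k), where k penalizes off-centering.
def k_factor(design_center: float, process_mean: float, usl: float, lsl: float) -> float:
    return abs(design_center - process_mean) / ((usl - lsl) / 2)

def cpk(usl: float, lsl: float, sigma: float,
        design_center: float, process_mean: float) -> float:
    cp = abs(usl - lsl) / (6 * sigma)
    return cp * (1 - k_factor(design_center, process_mean, usl, lsl))

# Centered process: k = 0, so Cpk equals Cp.
print(round(cpk(10.5, 9.5, 0.1, 10.0, 10.0), 2))  # 1.67
# Same process shifted 0.1 off the design center: Cpk drops.
print(round(cpk(10.5, 9.5, 0.1, 10.0, 10.1), 2))  # 1.33
```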

Process capability assumes that the process width is within the design width, with the two centers being very close. It is also possible that a process is capable but the mean of the process is off, pushing one tail of the process distribution outside the design specification. In this situation, the process is capable but out of control; the process distribution does not have to be wider than the design specification for nonconforming parts to be produced.

There are three primary ways to increase the process capability index (Cpk):

1. Widen the design specification. Widening the design specification changes the numerator of the Cp equation. Relaxing the design specification will result in a higher Cp calculation by making the numerator of the equation relatively larger than the denominator. Engineers should never widen the specification simply to increase the process capability calculation.

2. Narrow the process width. This is the most desirable way to increase the process capability index. It requires the reduction or elimination of process variability, which results in a smaller value in the denominator of the Cpk calculation. Because variance is the enemy of consistent quality, variance reduction is the best way to improve process capability.

3. Center the process. Because off-centering of a process lowers process capability, centering a process will result in a smaller non-centering correction factor (k). A smaller k factor adjustment means that the Cpk calculation will be higher.

It is also possible to affect the Cpk calculation by employing a combination of these three methods.

PROCESS CAPABILITY (Cp) EXAMPLE Suppose that engineering establishes that the length of a stamped sheet-metal part must have a design specification of 12 +/– .02 inches. Parts sampled from a process that produces the stamped part have a standard deviation of .007. What is the process capability (Cp) of this process?

Cp = .04 / (6 × .007) = .95

The Cp calculation reveals that the process is not capable of producing parts that will consistently meet the design specification of +/– .02 inches (a width of .04 inches). The Cp of .95 is well below even a minimum target level of 1.33, let alone the 2.0 goal discussed earlier. Variability within the process is wider than the design specification.

After analysis and adjustment, an operator measures another batch of parts and finds the sample has a standard deviation of .004. What is the new process capability for this process? (Note that the design specification width, or the numerator, has not changed.)

Cp = .04 / (6 × .004) = 1.67

The process is now highly capable of producing parts that meet the design specification requirement.
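Both stamped-part calculations can be reproduced in a few lines (the function name is ours):

```python
# Cp for the stamped sheet-metal part: spec width .04 inches (12 +/- .02).
def cp(spec_width: float, sigma: float) -> float:
    return spec_width / (6 * sigma)

print(round(cp(0.04, 0.007), 2))  # 0.95: not capable
print(round(cp(0.04, 0.004), 2))  # 1.67: capable after variance reduction
```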

Process variance, reflected by the standard deviation measurement, is now much narrower compared to the design specification width.

PROCESS CAPABILITY (Cpk) EXAMPLE A plastic injection molding machine should produce parts with a design specification (as set by product engineers) of 6.00 +/– .05 inches. A study reveals that the process actually produces parts with an average measurement of 6.02 inches with a sample standard deviation of .01 inches. What are the Cp and Cpk of this process?

Solution: First, calculate Cp (the design specification width divided by the process width):

Cp = .10 / ((6)(.01)) = 1.67

Second, calculate k, the adjustment factor for off-centering:

k = |6.00 – 6.02| / (.10/2) = .02/.05 = .4

Third, use the k correction factor to calculate Cpk:

Cpk = (1 – .4)(1.67) = 1.002

This example shows why the Cp calculation alone is not enough to demonstrate process capability.

Although the Cp value of 1.67 indicates a highly capable process, the adjustment due to process off-centering reduces Cpk to a less-than-acceptable level. In this case, there is a need to center the process as close as possible to the design center.
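The injection-molding example can be verified in code. Note that carrying full precision gives a Cpk of exactly 1.0; the 1.002 in the text comes from rounding Cp to 1.67 before multiplying.

```python
# Injection-molding example: spec 6.00 +/- .05, process mean 6.02, sigma .01.
usl, lsl = 6.05, 5.95
mean, sigma, target = 6.02, 0.01, 6.00

cp = (usl - lsl) / (6 * sigma)              # design width / process width
k = abs(target - mean) / ((usl - lsl) / 2)  # off-centering correction factor
cpk = cp * (1 - k)                          # capability adjusted for centering

print(round(cp, 2), round(k, 1), round(cpk, 2))  # 1.67 0.4 1.0
```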

STATISTICAL PROCESS CONTROL CHARTING (SPC)

Many buying companies require that suppliers have the ability to monitor quality levels throughout a production process. This requirement supports the use of statistical process control (SPC) charting. SPC is a maintenance tool that helps identify if a process is conforming to preestablished statistical levels. The process is also useful for detecting shifts in a process that may lead to nonconforming output.

SPC requires an operator to measure periodically a small batch of parts against some product or process attribute and then compare the sample to preestablished control limits. Sampling usually consists of four or five pieces sampled randomly from a batch of parts. The exact size of the sample is statistically derived, and depends on the size of the population and the level of risk the buying company is willing to assume. The upper and lower control limits, established statistically, define when a process is no longer producing acceptable output. Output that falls within the upper and lower control limits is “in control” while output outside the control limits indicates the process is “out of control.” An operator immediately halts an out-of-control process to identify and correct the cause(s) of the problem.

We normally use two charts. The first, called the X̄ (X-bar) chart, plots the average of the parts taken within a sample. The second, called the R (range) chart, plots the mathematical difference between the highest and lowest measures within a sample. Note that a point outside the upper or lower control limit only indicates the possibility that the process is out of control. An organization must determine whether the cause of this outlier is special (i.e., a unique or random occurrence) or common (i.e., a systematic problem). Even though all points may fall within the upper and lower control limits, this does not necessarily mean that the process is in control. An operator may stop a process if a trend line suggests the process is demonstrating nonrandom behavior.

The key features of statistical process control charting include

• Measuring a specific quality attribute (usually by an operator) while a product is being produced

• Using this approach only after determining process capability through the Cpk calculation

• Providing timely information concerning currently produced items. Is the process meeting requirements? Are there shifts in the production process that signal future problems?

• Allowing an operator to take corrective action when charting indicates a problem.

STEPS FOR DEVELOPING AN SPC CONTROL CHART Assuming the data follow a normal distribution, how do we determine the control limits when establishing statistical process control charts?v

The following steps will help guide the development of statistical process control charts:

1. Select the key product attribute(s) or parameter(s) to measure and control. Key parameters or attributes are those that are most important to the performance or appearance of an item.

2. Sample periodically from the production process. A commonly accepted practice is to take samples of four to five units. Sampling occurs between specified points in time. For example, an operator may take a sample every half-hour.

3. Calculate the average (X̄) of each subgroup and the range within each subgroup or sample. The range (R) is the difference between the highest and lowest of the measures within a subgroup or sample.

4. Take approximately 25 to 30 samples of four to five units each. Statistical evidence indicates that establishing the X̄ and R control charts requires 25 to 30 samples.

5. After taking the required samples, calculate the average of all subgroup averages (the grand average) and calculate the average range (R-bar) using the ranges from each subgroup or sample.

6. Calculate the upper and lower control limits for the X̄ chart using the formulas provided in Exhibit 9.12.

7. Calculate the upper and lower control limits for the R chart using the formulas provided in Exhibit 9.12.

8. Draw the upper and lower control limits on the X̄ and R charts.

9. Begin charting measures by taking samples periodically. Initially, an operator takes samples on a frequent basis. Once a process is stable, samples usually occur less often. If one or more sample points fall outside the upper or lower control limits on the X̄ or R chart, investigate the cause(s) of the process variance.
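Steps 3 through 7 can be sketched without Exhibit 9.12 by using the standard published Shewhart chart constants (A2 = 0.729, D3 = 0, D4 = 2.282 for subgroups of four). The sample data below are illustrative, not from the text; real charting would use the 25 to 30 subgroups called for in step 4.

```python
# Control limits from subgroup averages and ranges (steps 3-7 above).
A2, D3, D4 = 0.729, 0.0, 2.282  # standard Shewhart constants for subgroup size 4

samples = [
    [7.00, 7.01, 6.99, 7.00],
    [7.02, 6.98, 7.00, 7.01],
    [6.99, 7.00, 7.01, 6.99],
]  # illustrative subgroups; in practice, collect 25-30

xbars = [sum(s) / len(s) for s in samples]      # step 3: subgroup averages
ranges = [max(s) - min(s) for s in samples]     # step 3: subgroup ranges
grand_avg = sum(xbars) / len(xbars)             # step 5: grand average
r_bar = sum(ranges) / len(ranges)               # step 5: average range

ucl_x, lcl_x = grand_avg + A2 * r_bar, grand_avg - A2 * r_bar  # step 6
ucl_r, lcl_r = D4 * r_bar, D3 * r_bar                          # step 7
print(round(ucl_x, 3), round(lcl_x, 3), round(ucl_r, 3), round(lcl_r, 3))
```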

Statistical control requires an X̄ chart (average measure within a sample) and an R chart (total range within a sample). Why is the range within each sample important? Consider a process that has a design center of 10 inches. Assume an SPC sample provides measures of 9.5, 9.5, 10.5, and 10.5 inches. The average of the measures within this sample is 10 inches. The sample average makes it appear that the process is in perfect alignment! The range, however, indicates a wide dispersion of measures within the sample. In this case, the range is one inch (the difference between the highest and lowest measure within the sample). This range would surely fall outside the control limits of the R chart. Although the average of the sample is acceptable, the range is not. The range calculation reveals that a quality control problem exists that requires investigation and correction. An operator must use the two charts together to establish whether a process is truly in control.
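The arithmetic for that sample is trivial to confirm:

```python
# The sample discussed above: an on-target average can hide a wide range.
sample = [9.5, 9.5, 10.5, 10.5]
avg = sum(sample) / len(sample)
rng = max(sample) - min(sample)
print(avg, rng)  # 10.0 1.0 -- mean on the design center, but the spread signals trouble
```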

SPC EXAMPLE Exhibit 9.13 provides sample data for establishing a control chart for a specific product attribute. Exhibit 9.12 provides the formulas and factors required for determining the 3-sigma (standard deviation) control limits for X and R charts. Based on the calculations, the upper and lower control limits for the X chart (from Exhibit 9.12) are 7.031 and 6.971. The calculated upper and lower control limits for the R chart are 0 and .1099. Once charting begins, any samples that measure outside these limits require investigation with a possible correction to the production process. Furthermore, samples that demonstrate a nonrandom pattern, even if they remain within the upper and lower limits, might also indicate a potential problem.

III. DESIGN OF EXPERIMENTS

Statistical control techniques only identify whether a process is out of control or in control. Control techniques do not reduce the sources of variation that cause a process to operate beyond accepted levels.

Simply stated, measurement does not equate directly with quality improvement or variance reduction. As a result, firms require an approach that allows the systematic identification and reduction of variation within a process. This approach is called design of experiments.

Design of experiments recognizes that quality improvement results from eliminating the sources of variation that contribute to inconsistent quality. This approach focuses on various product/process combinations that are capable of creating consistent uniformity at the lowest total cost. Because it is impossible to focus on all combinations, users evaluate only a few experimental conditions that cover the range of product/process outcomes. While the details of the design and analysis of experiments are too complicated for the limited coverage presented here, it is important to understand the basic steps of design of experiments.vi

• Identify the important variables within a process. These variables may be product or process parameters, materials or components from suppliers, or even measuring equipment factors. Important variables are those that contribute variance or variability.

• Separate these variables into a maximum of four key variables. It is difficult to conduct controlled experiments with more than three or four critical variables.

• Reduce the variation on these critical variables through redesign, supplier process improvement, equipment improvement, etc.

• Widen the design specification or tolerances, wherever possible, on less critical variables to reduce costs. This will also increase attention on the most important variables.

Design of experiments is a systematic approach for identifying and eliminating sources of variation within a process. (Recall that variation is measured by the standard deviation of the output, which is the denominator of the Cpk calculation). The objective of design of experiments is to identify and reduce the major causes of variance that affect product quality and consistency. For example, inconsistent materials and component inputs received from suppliers, inadequate specifications provided to suppliers, too stringent product or process specifications, and equipment that produces inconsistencies due to wear or age can all lead to inconsistency or process variance.
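A minimal two-level factorial sketch illustrates the idea of isolating the variables that contribute variance. The factor names and response values below are hypothetical, chosen only to show how a main effect is computed.

```python
# Hypothetical 2x2 factorial experiment: two candidate variables, each run at a
# low (-1) and high (+1) setting. The response is the observed variability
# (std. dev. of output) at each combination -- smaller is better.
factors = ["material_lot", "machine_setup"]
response = {(-1, -1): 0.012, (-1, +1): 0.019,
            (+1, -1): 0.011, (+1, +1): 0.018}

# Main effect of a factor: average response at its high setting minus average
# at its low setting. A large effect flags a major source of variance.
effects = {}
for i, name in enumerate(factors):
    high = [r for combo, r in response.items() if combo[i] == +1]
    low = [r for combo, r in response.items() if combo[i] == -1]
    effects[name] = sum(high) / len(high) - sum(low) / len(low)
    print(name, round(effects[name], 4))
```

Here the machine setup drives most of the variability (effect 0.007 versus -0.001 for the material lot), so variance-reduction effort would focus there first.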

IV. PROBLEM-SOLVING TECHNIQUES TRAINING

A supplier with an established statistical process control (SPC) system that has not developed problem-solving skills will likely lack the ability to improve quality continuously. When a problem is encountered, the supplier’s personnel must be able to systematically study the problem, identify the root causes, take corrective action, and ensure that the problem does not occur again. This process is known as problem solving. The problem-solving process was first introduced by W.E. Deming, an American quality expert who worked for many years with Japanese engineers following World War II.vii Dr. Deming’s methods became so well known in Japan that the major quality award in the country was named the Deming Prize.

Deming’s approach to problem solving was as follows:

• Plan: Determine the problem and identify the root cause. In many cases, this may involve asking the question “Why?” at least five times before the true root cause is identified.

• Do: Implement a corrective action to solve the problem.

• Check: Verify through measurement that the corrective action solved the problem.

• Act: If the corrective action worked, standardize it into the task.

This series of steps became known as the Plan—Do—Check—Act cycle, a never-ending cycle of continuous improvement. Once the major sources of a problem are eliminated, the next set of problems or sources of variance is attacked. This cycle embodies the Japanese kaizen philosophy of continuous improvement, which states that a problem is never fully solved; it only becomes easier to manage.

Some suppliers may have no experience with formal problem-solving techniques. In this case, the buying firm has an opportunity to train suppliers in how to identify and eliminate quality-related problems.

Many larger companies have developed a structured approach for problem identification and elimination, particularly through the use of teams. Xerox developed one approach to problem solving. This approach is typical of many group problem-solving techniques:

Step 1 Identify and select a problem.

Critical to this step is the ability to agree upon the problem that requires solving. A company that is just beginning the quality improvement process usually has a large choice of problems from which to choose. Usually, a group should identify and select problems having the most serious impact on quality.

The measurement of nonconformance discussed earlier helps in identifying critical problems.

Step 2 Analyze the problem.

The key feature of this step is analyzing the possible causes of a problem and then identifying and ranking the major causes. If a group does not have the resources to evaluate possible causes of complex problems, it may have to request assistance from outside the team.

Step 3 Generate possible solutions.

During this step, a problem-solving group generates many ideas about how to solve the problem identified in Step 1. The output from this step is a listing of potential solutions that are subject to detailed evaluation.

Step 4 Select and plan the solution.

The group must decide the best alternative for solving the problem. The group must also develop implementation plans to carry out the selected alternative. The group should develop contingency plans in the event the primary plan does not succeed.

Step 5 Implement the solution.

This step requires the group to implement the primary plan agreed upon in Step 4. Furthermore, the group must implement any contingency plan as required.

Step 6 Evaluate the effectiveness of the solution.

A primary objective of the problem-solving process is the permanent elimination of a problem. This step requires the group to verify that an implemented plan solved the problem identified in the first step.

Verification often requires the development of measurement systems that track the costs created by the problem. If the selected plan does not eliminate the problem, it is the group’s responsibility to continue addressing the problem until it is eliminated.

A buying company that is knowledgeable about problem-solving techniques may find it worthwhile to work with suppliers who are not as familiar. The end result should be an increased ability by the supplier to recognize and solve quality-related problems, which, in turn, means fewer problems for the buyer!

i Carlton Berger, Quality Improvement Through Leadership and Empowerment: A Business Survival Handbook, Pennsylvania MILRITE Council, 1991, p. 6.

ii Handfield and Ghosh, “Creating a Total Quality Culture.”

iii Philip Crosby, Quality Improvement Through Defect Prevention: The Individual’s Role (Winter Park, Florida: Philip Crosby Associates, 1985).

iv Adapted from Bhote, World Class Quality.

v Bhote, World Class Quality, p. 29.

vi Bhote, World Class Quality, pp. 67–68.

vii See W.E. Deming, Out of the Crisis (Cambridge: Massachusetts Institute of Technology, Center for Advanced Engineering Study, 1986).
