
Interfaces, Vol. 36, No. 1, January–February 2006, pp. 6–25. doi 10.1287/inte.1050.0181, issn 0092-2102, eissn 1526-551X. ©2006 INFORMS

General Motors Increases Its Production Throughput

Jeffrey M. Alden, General Motors Corporation, Mail Code 480-106-359, 30500 Mound Road, Warren, Michigan 48090, [email protected]
Lawrence D. Burns, General Motors Corporation, Mail Code 480-106-EX2, 30500 Mound Road, Warren, Michigan 48090, [email protected]
Theodore Costy, General Motors Corporation, Mail Code 480-206-325, 30009 Van Dyke Avenue, Warren, Michigan 48090, [email protected]
Richard D. Hutton, General Motors Europe, International Technical Development Center, IPC 30-06, D-65423 Rüsselsheim, Germany, [email protected]
Craig A. Jackson, General Motors Corporation, Mail Code 480-106-359, 30500 Mound Road, Warren, Michigan 48090, [email protected]
David S. Kim, Department of Industrial and Manufacturing Engineering, Oregon State University, 121 Covell Hall, Corvallis, Oregon 97331, [email protected]
Kevin A. Kohls, Soar Technology, Inc., 3600 Green Court, Suite 600, Ann Arbor, Michigan 48105, [email protected]
Jonathan H. Owen, General Motors Corporation, Mail Code 480-106-359, 30500 Mound Road, Warren, Michigan 48090, [email protected]
Mark A. Turnquist, School of Civil and Environmental Engineering, Cornell University, 309 Hollister Hall, Ithaca, New York 14853, [email protected]
David J. Vander Veen, General Motors Corporation, Mail Code 480-734-214, 30003 Van Dyke Road, Warren, Michigan 48093, [email protected]

In the late 1980s, General Motors Corporation (GM) initiated a long-term project to predict and improve the throughput performance of its production lines to increase productivity throughout its manufacturing operations and provide GM with a strategic competitive advantage. GM quantified throughput performance and focused improvement efforts in the design and operations of its manufacturing systems through coordinated activities in three areas: (1) it developed algorithms for estimating throughput performance, identifying bottlenecks, and optimizing buffer allocation; (2) it installed real-time plant-floor data-collection systems to support the algorithms; and (3) it established common processes for identifying opportunities and implementing performance improvements. Through these activities, GM has increased revenue and saved over $2.1 billion in over 30 vehicle plants and 10 countries.

Key words: manufacturing: performance/productivity; production/scheduling: applications.

Founded in 1908, General Motors Corporation (GM) is the world's largest automotive manufacturer. It has manufacturing operations in 32 countries, and its vehicles are sold in nearly 200 nations (General Motors Corporation 2005b). GM's automotive brands include Buick, Cadillac, Chevrolet, GMC, Holden, HUMMER, Opel, Pontiac, Saab, Saturn, and Vauxhall. In 2004, the company employed about 324,000 people worldwide and sold over 8.2 million cars and trucks, about 15 percent of the global vehicle market, earning a reported net income of $2.8 billion on over $193 billion in revenues (General Motors Corporation 2005a, b).

GM operates several hundred production lines throughout the world. In North America alone, GM operates about 100 lines in 30 vehicle-assembly plants (composed of body shops, paint shops, and general assembly lines), about 100 major press lines in 17 metal fabrication plants, and over 120 lines in 17 engine and transmission plants. Over 400 first-tier suppliers provide about 170 major categories of parts to these plants, representing hundreds of additional production lines.

The performance of these production lines is critical to GM's profitability and success. Even though the overall production capacity of the industry is above demand (by about 25 percent globally), demand for certain popular vehicles often exceeds planned plant capacities. In such cases, increasing the throughput of production lines increases profits, either by adding sales revenue or by reducing the labor costs associated with unscheduled overtime.

The Need to Increase Throughput
In the late 1980s, competition intensified in the automotive industry. In North America, this competition was fueled by an increasing number of imports from foreign manufacturers, customers' growing expectations for quality, and slow growth of the overall market, which led to intense pricing pressures that limited GM's opportunities to increase its prices and revenues. In comparison with its foreign competitors, GM was seen as wasteful and unproductive, seemingly unable to improve, and ineffectively copying Japanese methods without understanding the real production problems. Indeed, in competitive comparisons, The Harbour Report, the leading industry scorecard of automotive manufacturing productivity, ranked GM near the bottom in terms of production performance (Harbour and Associates, Inc. 1992). Many plants were missing production targets, working unscheduled overtime, experiencing high scrap costs, and executing throughput-improvement initiatives with disappointing results, while their managers argued about how best to meet production targets and become more productive and cost effective. GM was losing money, even with products in high demand, and was either cutting costs to the penny or opening the checkbook to solve throughput problems. Managers were extremely frustrated and had no logical plan to guide them in improving plant throughput and controlling production costs.

Vehicle launches (that is, the plant modifications required to produce a new vehicle) also suffered greatly: it took plants years, not months, to achieve throughput targets for new model lines, typically only after major, costly changes in equipment and layout. Long launches meant lost sales and expensive unplanned capital investment in machines, buffers, and space to increase throughput rates. Several factors contributed to inadequate line designs:
—Production data was unavailable or practically useless for throughput analysis, crippling engineers' ability to verify the expected performance of new lines prior to the start of production.
—Inadequate tools for analyzing manufacturing throughput, mostly discrete event simulation (DES), took days or weeks to produce results, during which engineers may have changed the line design.
—Intense corporate pressure to reduce the costs for line investment led to overly optimistic estimates of equipment reliability, scrap rates, rework rates, space requirements, material-flow capability, operator performance, and maintenance performance. These optimistic estimates were promoted by vendors who rated equipment performance based on ideal conditions.

Under these conditions, many plants had great difficulty achieving the optimistic throughput targets during launch and into regular production. GM had two basic responses: (1) increase its pressure on managers and plants to improve the throughput rates with the existing investment, or (2) invest additional capital in equipment and labor to increase throughput rates, often to levels even higher than the original target rates to pay for these unplanned investments. The confluence of these factors severely hindered GM's ability to compete; as a result, GM closed plants and posted a 1991 operating loss of $4.5 billion (General Motors Corporation 1992), a record at the time for a single-year loss by any US company.

Responding to the Need
Recognizing the need and the opportunity to improve plant productivity, GM's research and development (R&D) organization began a long-term project to improve the throughput performance of existing and new manufacturing systems through coordinated efforts in three areas:
—Modeling and algorithms,
—Data collection, and

—Throughput-improvement processes.
The resulting technical advances, implementation activities, and organizational changes spanned a period of almost 20 years. We list some of the milestones in this long-term effort:

1986–1987 GM R&D developed its first analytic models and software (C-MORE) for estimating throughput and work-in-process (WIP) inventory for basic serial production lines and began collecting data and defining a standard throughput-improvement process (TIP).
1988 GM's Linden body fabrication plant and its Detroit-Hamtramck general assembly plant implemented pilots of C-MORE and TIP.
1989–1990 GM R&D developed a new analytic decomposition model for serial lines, extending C-MORE to accurately analyze systems with unequal workstation speeds and to provide approximate performance estimates for parallel lanes.
1991 A GM R&D implementation team deployed C-MORE at 29 North American GM plants and programs, yielding documented savings of $90 million in a single year.
1994–1995 GM North America Vehicle Operations and GM Powertrain formed central staffs to coordinate data collection and consolidate throughput-improvement activities previously performed by divisional groups.
1995 GM R&D developed an activity-based network-flow simulation, extending C-MORE to analyze closed carrier systems and lines with synchronous transfer mechanisms.
1997 GM expanded the scope of implementation beyond improving operations to include designing production systems.
1999 GM Global Purchasing and Supply Chain began using C-MORE tools for supplier-development activities.
2000 GM R&D developed a new hybrid simulation system, extending C-MORE to analyze highly complex manufacturing systems within GM.
2003 GM Manufacturing deployed C-MORE tools and throughput-improvement processes globally.
2004 GM R&D developed an extensible callable library architecture for modeling and analyzing manufacturing systems to support embedded throughput analysis within various decision-support systems.

Modeling and Algorithms
In its simplest form, a production line is a series of stations separated by buffers (Figure 1), through which parts move sequentially until they exit the system as completed jobs; such serial lines are commonly the backbones of many main production lines in automotive manufacturing. Many extensions to this simple serial-line model capture more complex production-system features within GM. Example features include job routing (multiple passes, rework, bypass, and scrap), different types of processing (assembly, disassembly, and multiple parts per cycle), multiple part types, parallel lines, special buffering (such as resequencing areas), and conveyance systems (chains, belts, transfer bars, return loops, and automated guided vehicles (AGVs)). The usual measure of the throughput of a line is the average number of parts (jobs) completed by the line per hour (JPH).

Analyzing even simple serial production lines is complicated because stations experience random failures and have unequal speeds. When a single station fails, its (finite) input buffer may fill and block upstream stations from producing output, and its output buffer may empty and starve downstream stations of input. Speed differences between stations can also cause blocking and starving. This blocking and starving creates interdependencies among stations, which make predicting throughputs and identifying bottlenecks very difficult. Much research has been done on the throughput analysis of production lines (Altiok 1997, Buzacott and Shantikumar 1993, Chen and Chen 1990, Dallery and Gershwin 1992, Gershwin 1994, Govil and Fu 1999, Li et al. 2003, Papadopoulos and Harvey 1996, Viswandham and Narahari 1992).
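The blocking and starving dynamics described above are easy to reproduce in a toy model. The sketch below is our own illustration, not C-MORE's algorithm; the function name, the Bernoulli up/down stand-in for failures and repairs, and all parameters are hypothetical. Each cycle, a station that is up, not starved, and not blocked passes one job downstream:

```python
import random

def simulate_serial_line(p_up, buffer_cap, cycles=100_000, seed=7):
    """Slotted-time model of a serial line.  p_up[i] is the chance that
    station i is up in a given cycle (a crude stand-in for failures and
    repairs); buffer_cap[i] is the capacity of the buffer after station i.
    The first station is never starved and the last is never blocked,
    matching the usual boundary assumptions.  Returns throughput per
    cycle plus blocked and starved fractions per station."""
    rng = random.Random(seed)
    m = len(p_up)
    buf = [0] * (m - 1)                 # buffer i sits between stations i and i+1
    done = 0
    blocked = [0] * m
    starved = [0] * m
    for _ in range(cycles):
        up = [rng.random() < p for p in p_up]
        for i in range(m - 1, -1, -1):  # downstream first, so space frees up
            if not up[i]:
                continue
            if i > 0 and buf[i - 1] == 0:
                starved[i] += 1         # no input available this cycle
                continue
            if i < m - 1 and buf[i] >= buffer_cap[i]:
                blocked[i] += 1         # no room downstream this cycle
                continue
            if i > 0:
                buf[i - 1] -= 1
            if i < m - 1:
                buf[i] += 1
            else:
                done += 1               # job leaves the line
    return done / cycles, [b / cycles for b in blocked], [s / cycles for s in starved]
```

With p_up = [0.95, 0.85, 0.95] and small buffers, the slow middle station drags line throughput below its own 0.85 availability and reveals itself indirectly: its upstream neighbor is the one that gets blocked, and its downstream neighbor is the one that gets starved.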

Figure 1: A simple serial (or tandem) production line is a series of stations separated by buffers. Each unprocessed job enters Station 1 for processing during its work cycle. When the work is complete, each job goes into Buffer 1 to wait for Station 2, and it continues moving downstream until it exits Station M. The many descriptors of such lines include station speed, station reliability, buffer capacity, and scrap rates.

Although analysts naturally wish to construct very detailed discrete-event simulation models to ensure realistic depiction of all operations in a production system, such emulations are difficult to create, validate, and transport between locations. The number of analyses and the time they take make such simulation models impractical. In analyzing production systems, a major challenge is to create tools that are fast, accurate, and easy to use. The trade-offs between these basic requirements motivated us in developing decomposition-based analytic methods and custom discrete-event-simulation solvers that we integrated with advanced interfaces into a suite of throughput-analysis tools called C-MORE.

While the specific assumptions we made in modeling and analysis depended on the underlying algorithm used for estimating performance, we generally assumed that workstations are unreliable, with deterministic operating speeds and exponentially distributed times between successive failures and times to repair. We also assumed that all buffers have finite capacity and that the overall system is never starved (that is, jobs are always available to enter) or blocked (that is, completed jobs can always be unloaded). Among the performance measures C-MORE provides are the following outputs:
—The throughput rate averaged over scheduled production hours, which we use to predict performance, to plan overtime, and to validate the model based on observed throughput rates;
—System-time and work-in-process averages that provide lead times and material levels useful for scheduling and planning;
—The average state of the system, which includes blocking, starving, processing, and downtimes for each station and average contents for each buffer;
—Bottleneck identification and analysis, which helps us to focus improvement efforts in areas that will indeed increase throughput and to assess opportunities to improve throughput;
—Sensitivity analysis to estimate the impact of varying selected input parameters over specified ranges of values, to account for uncertainty in input data, to identify key performance drivers, and to evaluate suggestions for system improvements; and
—Throughput distribution (per hour, shift, or week) to help managers to plan overtime and to assess system changes from the perspective of throughput variation (less variation is desirable).

The basic trade-off in the C-MORE tools and analysis capabilities is between the complexity of the system that can be modeled and the execution speed of the analysis (Figure 2). Because we always need the fastest possible, accurate analysis, the performance-estimation method we choose depends on the complexity of the manufacturing system of interest. C-MORE's basic analysis methods are analytic decomposition and discrete simulation.

Analytic-Decomposition Methods
The analytic solvers we developed represent the underlying system as a continuous flow model and rely on a convergent iterative solution of a series of nonlinear equations to generate performance estimates. These methods are very fast, which enables automatic or user-guided exploration of many alternatives for system design and improvement. Furthermore, tractable analytic models require minimal input data (for example, workstation speed, buffer size, and reliability information).

Except in the special case of identical workstations, no exact analytic expressions are known for throughput analysis of serial production lines with more than two workstations. However, heuristics exist; they are typically based on decomposition approaches that use the analysis of the station-buffer-station case iteratively as the building block to analyze longer lines (Gershwin 1987).
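A sensitivity analysis of the kind listed among C-MORE's outputs can be sketched generically: vary one input parameter over a range and re-evaluate a performance function. The throughput function below is a deliberately crude stand-in of our own (the slowest station's stand-alone rate, speed times availability), and all numbers are illustrative, not GM data:

```python
def sweep(values, evaluate):
    """Evaluate a performance function over a range of values for one
    input parameter; returns (value, result) pairs for tabulation."""
    return [(v, evaluate(v)) for v in values]

# Toy throughput estimate: a line can do no better than its slowest
# station's stand-alone rate (speed in jobs per hour times availability).
speeds = [60.0, 58.0, 62.0]

def line_jph(avail_2):
    avail = [0.95, avail_2, 0.93]
    return min(s * a for s, a in zip(speeds, avail))

# Sweep station 2's availability to see how strongly it drives throughput.
table = sweep([0.80, 0.85, 0.90, 0.95], line_jph)
```

In this toy case station 2 remains the binding constraint across the whole range, so each point of availability recovered translates directly into line throughput; a real sensitivity run would call a C-MORE-style solver instead of `line_jph`.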

[Figure 2 arrays the tools from fast execution on simple systems to slow execution on complex ones: the C-MORE analytic solver (serial lines; approximate simple routing), C-MORE network simulation (closed systems; transfer stations), C-MORE hybrid network/event simulation (complex routing; multiple products with style-based processing; carrier subsystems; external downtimes such as light curtains and schedules), and general-purpose discrete event simulation. Body shops and general assembly sit at the simpler end of the range; paint shops at the complex end.]

Figure 2: The basic trade-off among the C-MORE tools and general-purpose discrete-event simulation (DES) is between the complexity of the system that can be modeled and the execution speed of analysis. For typical systems, the C-MORE analytic solver produces results almost instantaneously, while C-MORE simulations usually execute between 50–200 times faster than comparable DESs. Because execution speed is important, throughput engineers typically use the fastest analysis tool that is capable of modeling the system of interest. For example, in the context of GM's vehicle-assembly operations, the analytic solver and network simulation tools are most suitable for the production lines found in body shops and general assembly, while the complexity of paint shop operations requires use of the hybrid network/event simulation or general-purpose DES.

The fundamental challenge is the analysis of the two-station problem. Our analysis approach to the two-station problem for our situation gives a closed-form solution (Alden 2002). We used this approach in the analytic throughput solver of C-MORE.

We based our analysis on a work-flow model approximation of the two-station system in which stations act like gates on the flow of work and buffers act like reservoirs holding work. This paradigm allows us to use differential equations instead of the more difficult difference equations. We assumed that station speeds are constant when processing under ideal conditions (no blocking, starving, or failures). We also assumed that each station's operating time between failures and its repair time are both distributed as negative exponential random variables. We used operating time instead of elapsed time, because most stations rarely fail while idle. We assumed that all variables are independent and generally unequal. We made one special simplifying assumption to handle mutual station downtime: when one station fails and is not working, the remaining station does not fail but processes at a reduced speed equal to its stand-alone throughput rate (its normal operating speed multiplied by its stand-alone availability). This assumption simplifies the analytic analysis by allowing convenient renewal epochs (for example, at station repairs) while still producing accurate results. We believe that we can relax this last assumption and still produce a closed-form solution, a possible topic of future work.

Our analysis approach focuses on the steady-state probability distribution of buffer contents at a randomly selected time of station repair. Under the (correct) assumption that this distribution is the weighted sum of two exponential terms plus two impulses at zero and full buffer contents, we can derive and solve a recursive expression by considering all possible buffer-content histories between two consecutive times of station repair (Appendix). Given this distribution, we can derive closed-form expressions for throughput rate, work in process, and system time. These expressions reduce to published results for special cases, for example, equal station speeds (Li et al. 2003). Extensive empirical observation and simulation validations have shown this approach to be highly accurate, with errors typically within two percent of observed throughput rates for actual systems at GM that have many stations. Large prediction errors in practice are usually caused by poor-quality data inputs; in these cases, we often use C-MORE to help us to identify the stations likely to have data-collection problems by using sensitivity analysis and comparisons with historical results. We obtain performance estimates for typical GM systems almost instantaneously and bottleneck and sensitivity analyses in a few seconds on a desktop PC.

Discrete Simulation Methods
The analytic method we described and other similar methods provide very accurate performance estimates for simple serial lines. Unfortunately, they are not applicable to GM's most complex production systems (for example, its paint shops). These production systems employ splits and merges with job-level routing policies (for example, parallel lanes and rework systems), use closed-loop conveyances (such as pallets or AGVs), or are subject to external sources of downtime that affect several disjoint workstations simultaneously (for example, light-curtain safety devices for nonadjacent operations). To analyze such features with the available analytic-decomposition methods, one must use various modeling tricks that yield approximate and sometimes misleading results.

General-purpose discrete-event-simulation (DES) software can analyze complex production systems with a high degree of accuracy. It takes too long, however, to develop and analyze these models for extensive what-if scenario analysis. GM needs such extensive scenario analysis to evaluate large sets of design alternatives and to explore continuous-improvement opportunities. To overcome the limitations of the available DES software and the modeling deficiencies of existing analytic methods, we developed and implemented new simulation algorithms for analyzing GM's production systems.

In these simulation algorithms, we used activity-network representations of job flow and efficient data structures (Appendix). We used these representations to accurately simulate the interaction of machines and the movement of jobs through the system, without requiring all events to be coordinated through a global time-sorted queue of pending actions. We implemented two simulation algorithms as part of the C-MORE tool set (Figure 2); one is based entirely on an activity-network representation, while the other augments such a representation with the limited use of an event queue to support analysis of the most complex manufacturing systems. Our internal benchmarking of these methods demonstrated that their analysis times are typically 50 to 200 times faster than those of comparable DES methods and their results are identical. Chen and Chen (1990) reported dramatic savings in computational effort for similar simulation approaches.

With the C-MORE simulation tools, we provide modeling constructs designed specifically for representing GM's various types of production systems. Examples include "synchronous line" objects to represent a series of adjacent workstations that share a single transfer mechanism to move jobs between the workstations simultaneously, convenient mechanisms to maintain the sequence order of jobs within parallel lanes, and routing policies to enforce restrictions at splits and merges. With such modeling constructs, the C-MORE simulation tools are much easier to use than general-purpose DES software for modeling GM's production systems. Although C-MORE's usability comes at the expense of flexibility, it provides two important benefits. First, end users spend far less time on model development than they would with commercial DES software (minutes or hours versus days or weeks); GM Powertrain Manufacturing, for example, reported that simulation-analyst time per line-design project has dropped by 50 percent since it began using these tools in 2002. Second, the use of common modeling constructs facilitates communication and transportability of models among different user communities within the company, a benefit that is especially important in overcoming organizational and geographic boundaries between global regions.
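The work-flow paradigm behind the analytic solver (stations as gates on a continuous flow, buffers as reservoirs) can be mimicked with a small time-stepped fluid simulation. This is our own illustration under the stated modeling assumptions (deterministic speeds, exponential operating time to failure and repair time); it is not the closed-form solver, it omits the paper's mutual-downtime simplification, and every name and number is hypothetical:

```python
import random

def fluid_two_station(s1, s2, cap, mtbf1, mttr1, mtbf2, mttr2,
                      horizon=20_000.0, dt=0.05, seed=1):
    """Two stations as gates on a continuous flow of work, one buffer as
    a reservoir of capacity `cap`.  Speeds s1, s2 are in jobs per unit
    time; failures accrue only while a station is actually working
    (operating time to failure ~ exp(mean mtbf)); repairs ~ exp(mean
    mttr).  Returns the long-run throughput rate."""
    rng = random.Random(seed)
    up = [True, True]
    clock = [rng.expovariate(1 / mtbf1), rng.expovariate(1 / mtbf2)]
    params = [(mtbf1, mttr1), (mtbf2, mttr2)]
    buf = done = t = 0.0
    while t < horizon:
        r1 = s1 if up[0] else 0.0
        r2 = s2 if up[1] else 0.0
        if buf >= cap and r1 > r2:   # full buffer blocks station 1
            r1 = r2
        if buf <= 0.0 and r2 > r1:   # empty buffer starves station 2
            r2 = r1
        buf = min(cap, max(0.0, buf + (r1 - r2) * dt))
        done += r2 * dt
        for i, rate in enumerate((r1, r2)):
            if up[i]:
                if rate > 0.0:       # failure clock runs only while working
                    clock[i] -= dt
                    if clock[i] <= 0.0:
                        up[i] = False
                        clock[i] = rng.expovariate(1 / params[i][1])
            else:                    # under repair
                clock[i] -= dt
                if clock[i] <= 0.0:
                    up[i] = True
                    clock[i] = rng.expovariate(1 / params[i][0])
        t += dt
    return done / horizon
```

With equal speeds of 1.0 and mtbf = 90, mttr = 10 (stand-alone availability 0.9), the estimated throughput climbs toward 0.9 as `cap` grows, which is exactly the decoupling role the buffer-as-reservoir picture assigns to it.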

Delivery Mechanism
To facilitate use of the C-MORE analysis tools (both analytic and simulation approaches), we designed and developed an extensible, multilayer software architecture, which serves as a framework for deploying the C-MORE tools (Figure 3). We implemented it as a set of callable libraries to provide embedded access to identical modeling and analysis capabilities for any number of domain-specific end-user software applications. This approach promotes consistency across the organization and allows disparate user groups with divergent interests to access identical analysis capabilities while sharing and reusing system models. As a concrete example, the throughput predictions obtained by engineers designing a future manufacturing line are consistent with the results obtained later by plant personnel using the line's actual performance data to conduct continuous-improvement activities, even though the two user groups access the C-MORE tools through separate user interfaces.

The C-MORE software architecture provides a modeling isolation layer, which decouples system modeling from analysis capabilities (much like AMPL decouples mathematical modeling from such solvers as CPLEX or OSL). The architecture provides analysis capabilities through "solvers" and "optimization modules." We use the term solver to refer to a software implementation of an algorithm that can analyze a model of a manufacturing system to produce estimates of system performance (for example, throughput, WIP levels, and station-level blocking and starving). Solvers may employ different methods to generate results, including analytic and simulation methods. We use the term optimization module to refer to a higher-level analysis that addresses a specific problem of interest to GM, typically by executing a solver as a subroutine within an iterative algorithmic framework. For example, C-MORE includes a bottleneck-analysis module that identifies the system components that have the greatest impact on system throughput. Other examples of optimization modules include carrier analysis, which determines the ideal number of pallets or carriers to use in a closed-loop system, and a buffer-optimization module, which heuristically determines the distribution of buffer capacity throughout the system according to a given objective, such as maximum throughput. Each of these optimization modules comprises an abstract modeling interface and one or more algorithm implementations. Generally, these optimization modules can use any available solver to estimate throughput performance.

Because its modeling and analysis capabilities are decoupled, the C-MORE software architecture allows users to reuse system models with different analysis approaches. In addition, the extensible architecture design facilitates rapid deployment of new analysis capabilities and provides a convenient mechanism for developing and testing new approaches.

[Figure 3 depicts multiple user interfaces sitting on a modeling isolation layer, beneath which the C-MORE callable library contains solvers (C-MORE analytic, C-MORE simulation) and optimization modules (bottleneck analysis, buffer optimization).]

Figure 3: The C-MORE callable library decouples system modeling from analysis and provides embedded throughput-analysis capabilities for multiple, domain-specific user interfaces. The software architecture design supports future extendibility as new analysis techniques are developed and modeling scope is broadened.
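The solver/optimization-module split can be sketched in a few lines. This is a schematic of the decoupling idea only, not C-MORE's API; the class names, the toy solver (slowest stand-alone rate), and all numbers are our inventions:

```python
from typing import List, Protocol

class LineModel:
    """Shared system model: station speeds (jobs per hour),
    availabilities, and interstation buffer capacities."""
    def __init__(self, speeds: List[float], avail: List[float],
                 buffers: List[int]):
        self.speeds, self.avail, self.buffers = speeds, avail, buffers

class Solver(Protocol):
    """Any analysis backend (analytic or simulation) that scores a model."""
    def throughput(self, model: LineModel) -> float: ...

class StandAloneBoundSolver:
    """Toy 'analytic' solver: the line does no better than its slowest
    station's stand-alone rate (speed times availability)."""
    def throughput(self, model: LineModel) -> float:
        return min(s * a for s, a in zip(model.speeds, model.avail))

class BottleneckAnalysis:
    """Optimization module: executes a solver as a subroutine, nudging
    each station's availability to find the change that helps most."""
    def __init__(self, solver: Solver):
        self.solver = solver

    def bottleneck(self, model: LineModel, delta: float = 0.01) -> int:
        base = self.solver.throughput(model)
        gains = []
        for i in range(len(model.speeds)):
            avail = list(model.avail)
            avail[i] = min(1.0, avail[i] + delta)
            trial = LineModel(model.speeds, avail, model.buffers)
            gains.append(self.solver.throughput(trial) - base)
        return max(range(len(gains)), key=gains.__getitem__)
```

Because `BottleneckAnalysis` holds only a `Solver`, swapping the toy solver for a simulation-based one changes nothing in the module, which is the consistency property the modeling isolation layer is meant to guarantee.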
Data Collection
Collecting production-line data is a difficult and massive task. A line may have hundreds of workstations, each with its own processing times and operating parameters. Furthermore, models with many data inputs tend to be complex and time consuming to analyze. Hence, for practical use, it is important to develop models and algorithms with modest data requirements that still produce meaningful results. To determine the appropriate data inputs, we conducted repeated trials and validation efforts that eventually defined the minimal data requirements as a function of production-line characteristics (routing, scrapping, failure modes, automation, conveyance systems, and so on).

In our first pilot deployments, we had almost no useful data to support throughput analyses. The data available was often overwhelming in quantity, inaccurate, stale, or lacking in internal consistency. GM had not established unambiguous standards that defined common measures and units. This lack of useful data presented a tremendous obstacle. Good data collection required ongoing tracking of changes in workstation speed, scrap counts, and causes of workstation stoppages (equipment failures, quality stops, safety stops, part jams, feeder-line starving, and so forth). We gradually instituted standardized data-collection practices to address these issues. Early on, we collected and analyzed well-defined data manually at several plants to prove its value in supporting throughput analysis. This success helped us to justify manual data-collection efforts in other plants for the purpose of increasing throughput rates. It also helped us to understand what data we needed and obtain corporate support for implementing automatic data collection in all GM plants.

Automatic data collection presented further challenges. GM's heritage as a federation of disparate, independently managed businesses meant that its plants employed diverse equipment and technologies. In particular, plant-floor control technologies varied widely from plant to plant, and standard interfaces did not exist. We could not deploy data-collection tools and techniques we developed at one plant to other plants without modifying them extensively. Likewise, we could not readily apply the lessons we learned at one plant to other plants. We needed common equipment, control hardware and software, and standard interfaces. GM formed a special implementation team to initiate data collection and begin implementation of throughput analysis in GM's North American plants. This activity continues today, with established corporate groups that develop and support global Web-accessible automatic data collection. These groups also support throughput analysis in all plants by training employees, analyzing data, and providing on-site expertise.

Automatic Data Collection, Validation, and Filtering
The automated machines and processes in automotive manufacturing facilities include robotic welders, stamping presses, numerically controlled (NC) machining centers, and part-conveyance equipment. Industrial computers called programmable logic controllers (PLCs) provide operating instructions and monitor the status of production equipment. To automate data collection, we implemented PLC code (software) that counts production events and records their duration; these events include machine faults and blocking and starving events. Software summarizes event data locally and transfers them from the PLC into a centralized relational database. We use separate data-manipulation routines to aggregate the production-event data into workstation-performance characteristics, such as failure and repair rates and operating speeds. We then use the aggregated data as input to C-MORE analyses to validate models, to identify bottlenecks, and to improve throughput.

We validate and filter data at several stages, which helps us to detect missing and out-of-specification data elements. In some cases, we can correct these data anomalies manually before we analyze the plant-floor data with C-MORE. We compare the C-MORE throughput estimates to actual observed throughput to detect errors in the model or data. We pass all data through the various automatic filters before using them to improve throughput. GM collects plant-floor data continually, creating huge data sets (typically from 15,000 to 25,000 data points per shift, depending on plant size and the scope of deployment). Therefore, we must regularly archive or purge historical performance data.

In general, PLCs can detect a multitude of event types; we had to identify which data are meaningful in estimating throughput. The C-MORE analysis tools helped GM to determine which data to collect and standard ways to aggregate the data. GM's standardization of PLC hardware and data-collection techniques has helped it to implement C-MORE and the throughput-improvement process (TIP) throughout North America, and it makes its ongoing globalization efforts possible.
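The aggregation step, rolling raw event records up into workstation-performance characteristics, might look like the sketch below. The record layout, field names, and the out-of-specification threshold are hypothetical, not GM's schema:

```python
from collections import defaultdict

def aggregate_events(events):
    """Summarize PLC-style event records into per-station characteristics.
    Each record is (station, kind, duration_min) with kind in
    {'cycle', 'fault', 'blocked', 'starved'}.  Produces gross speed (JPH),
    mean cycles between failures (MCBF), and mean time to repair."""
    acc = defaultdict(lambda: {"cycles": 0, "cycle_min": 0.0,
                               "faults": 0, "fault_min": 0.0})
    for station, kind, dur in events:
        a = acc[station]
        if kind == "cycle":
            a["cycles"] += 1
            a["cycle_min"] += dur
        elif kind == "fault":
            a["faults"] += 1
            a["fault_min"] += dur
    summary = {}
    for station, a in acc.items():
        summary[station] = {
            "jph": 60.0 * a["cycles"] / a["cycle_min"] if a["cycle_min"] else 0.0,
            "mcbf": a["cycles"] / a["faults"] if a["faults"] else float("inf"),
            "mttr_min": a["fault_min"] / a["faults"] if a["faults"] else 0.0,
        }
    return summary

def out_of_spec(summary, max_jph=120.0):
    """Simple validation filter: flag stations whose aggregated speed is
    implausible, a common sign of mislabeled or missing event data."""
    return [s for s, c in summary.items()
            if c["jph"] > max_jph or c["jph"] <= 0.0]
```

A filter like `out_of_spec` stands in for the "validate and filter at several stages" step: implausible aggregates are flagged for manual correction before the data feeds a throughput analysis.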

Historical Data for Line Design

As we collected data, we developed a database of actual historical station-level performance data. This database, which is organized by equipment type (make and model) and purpose, has become an integral part of GM's processes for designing new production lines. Previously, GM's best sources of performance estimates for new equipment were the machine specifications equipment vendors provided. Because vendors typically base their ratings on assumptions of ideal operating conditions, the ratings are inherently biased and often unsuitable as inputs to throughput-analysis models. GM engineers now have a source of performance estimates that are based on observations of similar equipment in use within actual production environments. Using this historical data, they can better predict future performance characteristics and analyze proposed production-line designs realistically, setting better performance targets and improving GM's ability to meet launch timetables and throughput targets.

Throughput-Improvement Processes

In our pilot implementations of throughput analysis in GM during the late 1980s, we faced several cultural challenges. First, we had difficulty assembling all the right people and getting them to focus on a single throughput issue unless it was an obvious crisis. Because those who solved large crises became heroes, people sought out and tackled the big problems quickly, rather than working on the little persistent problems. These little problems often proved to be the real throughput constraints, and identifying and quantifying them requires careful analysis. Second, our earliest implementation efforts appeared to be yet another set of not-invented-here corporate program-of-the-month projects, to be tolerated until they fizzled out. As a result, we had difficulty getting the support and commitment we needed for successful implementation. Finally, plant personnel lacked confidence that a black-box computer program could really understand the complexities of a production line.

We responded to these problems in several ways. We prepared solid case studies and excellent training materials to prove the value of throughput analysis, and we developed a well-defined process the plant would own to support and reward ongoing throughput improvements. To this last point, we developed and taught a standard TIP to promote the active and effective use of labor, management, data collection, and throughput analysis to find and eliminate throughput bottlenecks. The TIP consisted of the following steps:
—Identify the problem: Typically, it's the need to improve throughput rates.
—Collect data and analyze the problem: Use throughput analysis to find and analyze bottlenecks.
—Create plans for action: The planners are usually multidisciplinary teams of engineers, line operators, maintenance personnel, suppliers, managers, and finance representatives.
—Implement the solution: Assign responsibilities and obtain commitment to provide implementation resources.
—Evaluate the solution: Ensure that the solution is correctly implemented and actually works.
—Repeat the process.

To further cultural transformation, we distributed free copies of The Goal (Goldratt and Cox 1986) to personnel at all plants. We conducted classes in the basic theory of constraints (TOC) to help plant personnel understand, from a very fundamental perspective, the definition and importance of production bottlenecks, why they can be difficult to identify, and why we needed C-MORE to find them.

With the TIP in place and the support of plant managers and personnel, plants soon improved throughput and discovered new heroes—the teams that quickly solved persistent throughput problems and helped the plants meet their production targets. To further reward and promote the use of throughput analysis and TIP, GM gave a popular "Bottleneck Buster" award to plants that made remarkable improvements in throughput.

After plants improved throughput of existing production lines, we applied the process in a similar fashion to designing new production lines. We established formal workshops, especially in powertrain (engine and transmission) systems, to iteratively find and resolve bottlenecks using the C-MORE tools and a cross-functional team of experts.

An Early Case Study—The Detroit-Hamtramck Assembly Plant

In 1988, GM conducted a pilot implementation of C-MORE and TIP at the Detroit-Hamtramck assembly plant, quickly achieving substantial throughput gains and cost savings. This experience serves as a good example of improvements that can be realized through the use of these tools.

In the late 1980s, GM considered the Detroit-Hamtramck plant a model for its future, with hundreds of training hours per employee and vast amounts of new (and untested) advanced manufacturing technology throughout the facility. It was designed to produce 63 JPH with no overtime; yet the plant could barely achieve 56 JPH even with overtime often exceeding seven person-hours per vehicle. The associated costs of lost sales and overtime exceeded $100,000 per shift. Rather than increase efficiency, the plant's advanced manufacturing technology actually hindered improvement efforts because the added operational complexity made it difficult for plant workers and managers to determine where to focus improvement efforts. Under intense pressure to achieve its production targets, the plant worked unscheduled overtime and conducted many projects to increase throughput rates. Unfortunately, most of these initiatives produced meager results because they focused on local improvements that did not affect overall system throughput. The lack of improvement caused tremendous frustration and further strained the relationships among the labor union, plant management, and corporate management.

In 1988, GM researchers presented their new throughput-analysis tools (an early version of C-MORE) to the Detroit-Hamtramck plant manager, who in turn asked a plant-floor systems engineer to visit the GM Research Labs to learn more. Through interactions with GM researchers, the engineer learned about bottlenecks, throughput improvement, and C-MORE's analysis capabilities. He also read the book they recommended, The Goal (Goldratt and Cox 1986), to learn more about continuous improvement.

Back at the plant, he started collecting workstation performance data. After several days, he had gathered enough data to model and analyze the system using C-MORE, and he found the top bottleneck operation: the station installing the bulky headliner in the ceiling of each car. The data showed this station failing every five cycles for about one minute. Although every vehicle had to pass through this single operation, managers had never considered this station a significant problem area because its downtime was small and no equipment actually failed. The engineer investigated the operation and found that the operator worked for five cycles, stopped the line, went to get five headliners, and then restarted the line. The downtime was measured at slightly less than one minute, and it occurred every five cycles. Further investigation revealed that the underlying problem was lack of floor space. The location of a supervisor's office prevented the delivery and placement of the headliners next to the line. Ironically, the plant had previously considered relocating the office but had delayed the move because managers did not consider it a high priority.

Based on this first C-MORE analysis, the engineer convinced the planner to move the office, despite its low priority. On the Monday after the move, the headliners were delivered close to the line and the line stops ended. Not surprisingly, the throughput of the whole plant increased.

Up to this point, the prevailing opinion had been that plant operations were so complex that one could not predict cause-and-effect relationships, and the best one could do was to fix as many things as possible and hope for the best. Now GM could hope to systematically improve plant throughput.

We repeatedly collected data and addressed bottlenecks until the plant met its throughput targets consistently and cut overtime in half (Figure 4). The plant's achievement was recognized and documented in The Harbour Report, which placed the Detroit-Hamtramck plant in the top 10 most-improved vehicle-assembly plants with a 20 percent improvement in productivity (Harbour and Associates, Inc. 1992).

Because of this success, plant operations changed. First, the language changed; such terms as bottlenecks, stand-alone availability (SAA), mean cycles between failures (MCBF), and mean time to repair (MTTR) became part of the vernacular. Armed with the C-MORE analysis, the plant could prioritize repairs and changes
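The arithmetic behind the headliner bottleneck is worth making explicit. Only the stop length and frequency (about one minute, every five cycles) and the 63 JPH design rate come from the account above; the calculation itself is our illustration of why a "small" stop dominated plant-wide performance.

```python
DESIGN_JPH = 63.0
cycle_s = 3600.0 / DESIGN_JPH      # about 57.1 s per job at the design rate
stop_s, every_n = 60.0, 5          # one-minute stop every five cycles
penalty_s = stop_s / every_n       # 12 s of downtime spread over each job

# Stand-alone rate of the headliner station with its recurring stop:
effective_jph = 3600.0 / (cycle_s + penalty_s)
print(round(effective_jph, 1))     # about 52.1 JPH from this station alone
```

An unbuffered line fed by this station would be capped near 52 JPH, far below the 63 JPH design rate; the plant's observed 56 JPH sat between the two, presumably because surrounding buffers absorbed part of each stop.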

more effectively than ever before. Accurate data collection exposed the true performance characteristics of workstations, and the finger pointing that caused distrust in the plant gave way to teamwork, with one area of the plant volunteering resources to another to help it to improve the bottleneck station.

Building on this successful pilot, the Detroit-Hamtramck plant continued its productivity improvement efforts in the ensuing years and achieved results that were described as "phenomenal" in The Harbour Report North America 2001 (Harbour and Associates, Inc. 2001).

[Figure 4 (chart omitted): monthly throughput rate rose from 56.1 to 62.8 JPH while overtime fell from 7.1 to 3.4 hours per vehicle between November 1988 and May 1989.]

Figure 4: Within six months of using C-MORE in the Detroit-Hamtramck assembly plant in November 1988, we found and removed bottlenecks, increased throughput by over 12 percent, attained the 63 jobs-per-hour (JPH) production target, and cut overtime hours per vehicle in half.

Implementation and the Evolution of the Organization

Even after early pilot successes, it took a special effort and top management support to implement throughput analysis widely within GM. In 1991, with a vice president as eager champion, we formed a special implementation-project team. This team consisted of 10 implementation members and 14 supporting members to help with management, consultation, training, programming, and administrative work. Our stated goal was to "enable GM's North American plants to save $100 million in 1991 through the implementation of C-MORE." We trained team members and stationed them in 15 of the most critical plants. They collected data, modeled and analyzed systems using C-MORE to find the top throughput bottlenecks, ran meetings, tracked performance, and provided overall training. The results were remarkable: plants achieved dramatic throughput increases (over 20 percent was common), reduced their overtime hours, and often met their production targets for the first time. Our most conservative accounting of dollar impact for this effort was over $90 million in a single year (1991) through C-MORE and TIP implementations for 29 GM plants and car programs. This work proved the value and feasibility of throughput analysis.

To build on the success of the implementation project team, central management established a dedicated staff to spread the use of C-MORE and TIP to other plants. Thus, in 1994, it formed a central organization within GM North America Vehicle Operations to continually help plants with all aspects of throughput analysis, education, and implementation. Over time, this group, which grew to almost 60 members, developed common C-MORE data-collection standards as part of all new equipment going into plants and produced Web-enabled bottleneck reports for use across the corporation. It developed training courses and custom games to teach employees and motivate them to use throughput-improvement tools and concepts. Improvement teams identified and attacked launch problems before new equipment started running production parts. Integral to this activity, C-MORE is now a standard corporate tool for improving plant throughput and designing production lines. To support its widespread use, GM undertook several activities:
—GM integrated C-MORE into multiple desktop applications to provide various users easy access to throughput-analysis capability.
—GM provided formal training and software to over 400 global users.
—GM formed a C-MORE user's group to share developments, lessons, and case studies across the organization and summarize results in a widely distributed newsletter.
—GM established common methods and standards for data collection to support throughput analysis and its rapid implementation worldwide.
—GM established a "Bottleneck Buster" award to recognize plants that make remarkable improvements in throughput.

We are continuing to expand the use of the C-MORE tools and TIP in GM's global operations.

The C-MORE tools and data provide a common platform and language that engineers, managers, and analysts across GM's worldwide operations use to develop and promote common best-manufacturing practices. Standard data definitions and performance measures facilitate communication among GM's geographically dispersed operations and enable the fast transfer of lessons learned among manufacturing facilities, no matter where they are located. C-MORE underpins GM's new global manufacturing system (GMS), which it is currently deploying throughout its worldwide operations.

Because of the success GM realized internally using C-MORE and TIP, it formed a separate corporate organization to conduct supplier-development initiatives. Through its activities, GM can improve the manufacturing capabilities of external part suppliers by helping them to design and improve their production systems. Previously, GM plants had to react to unexpected constraints on part supplies (often during critical new vehicle launches); now GM engineers collaborate with suppliers to proactively develop and analyze models of suppliers' manufacturing systems. With GM's assistance, key suppliers are better able to design systems that will meet contractual throughput requirements. As a result, GM's suppliers become more competitive, and GM avoids the costs and negative effects of part shortages. In one recent example, 12 (out of 23) key suppliers for a new vehicle program had initially designed systems that were incapable of meeting throughput-performance requirements; GM discovered this situation and helped the suppliers to resolve it prior to the vehicle launch period, avoiding costly delays. Although their impact is difficult to quantify, these supplier-development initiatives become increasingly important as GM relies more heavily on partnerships with external suppliers.

Impact

The use of the C-MORE tools, data, and the throughput-improvement process pervades GM, with applications to improve manufacturing-system operations, production-system designs, supplier development, and production-systems research. The impact of work in these areas is multifaceted. It has yielded measured productivity gains that reduced costs and increased revenue, and it has changed GM's processes, organization, and culture.

Profits

Use of the C-MORE tools and the TIP for improving production throughput alone yielded over $2.1 billion in documented savings and increased revenue, reflecting reduced overtime and increased sales for high-demand vehicles. Although it is impossible to measure cost avoidance, internal users estimate the additional value of using the C-MORE tools in designing manufacturing systems for future vehicles to be several times this amount; they have improved decisions about equipment purchases and GM's ability to meet production targets and reduce costly unplanned investments. We have not carefully tracked the impact on profits of using the C-MORE tools in developing suppliers.

Competitive Productivity

The use of C-MORE and the TIP has improved GM's ability to compete with other automotive manufacturers. The remarkable improvements GM has achieved have been recognized by The Harbour Report, the leading industry scorecard of automotive manufacturing productivity. When GM first began widespread implementation of C-MORE in the early 1990s, not one GM plant appeared on the Harbour top-10 list of most productive plants. In fact, of the 15 least productive plants measured by Harbour from 1989 through 1992, 14 were GM plants. During this same time period, The Harbour Report estimate for GM's North American manufacturing capacity utilization was 60 percent, and its estimate of the average number of workers GM needed to assemble a vehicle was more than 50 percent higher than its largest domestic competitor (Harbour and Associates, Inc. 1992). This unbiased external assessment of GM's core manufacturing productivity clearly attested to its need to improve throughput.

In the ensuing years, GM focused continually on increasing throughput, improving vehicle launches, and reducing unscheduled overtime. C-MORE enabled these efforts, helping GM to regain a competitive position within the North American automotive industry. GM has made steady, year-after-year improvements in manufacturing productivity

since 1997 (Figure 5). Its earlier use of C-MORE also led to measurable improvements, but direct comparison with prior years is problematic because Harbour Consulting changed its measurement method in 1996. According to the 2004 Harbour Report, GM's total labor hours per vehicle dropped by over 26 percent since 1997. GM now has four of the five most productive assembly plants in North America and sets the benchmark in eight of 14 vehicle segments (Harbour Consulting 2004). The estimate of overall vehicle-assembly capacity utilization is now 90 percent. In addition, GM's stamping operations lead the industry in pieces per hour (PPH), and GM's engine operations lead the US-based automakers. C-MORE and TIP, combined with GM's new global manufacturing system (GMS) and other initiatives, have enabled GM to make these outstanding improvements.

[Figure 5 (chart omitted): total labor hours per vehicle (HPV; assembly, stamping, and powertrain) for General Motors, DaimlerChrysler, and Ford Motor Company, 1997–2004; GM's curve falls from 46.5 HPV in 1997 to roughly 34 HPV in 2004.]

Figure 5: Productivity improvements in General Motors' North American manufacturing operations have outpaced those of its two major domestic competitors, DaimlerChrysler and Ford Motor Company. From 1997 through 2004, GM's total labor hours per vehicle improved by over 26 percent (Harbour and Associates, Inc. 1999–2003, Harbour Consulting 2004).

Standard Data Collection

Common standards for data definitions, plant-floor data-collection hardware and software, and data-management tools and practices have made the otherwise massive effort of implementing and supporting throughput analysis by GM engineers and in GM plants a well-defined, manageable activity. Plant-floor data collection and management is now a global effort, with particular emphasis in Europe and the Asia-Pacific region.

Line Design

GM engineers often design new production lines in cross-functional workshops whose participants try to minimize investment costs while achieving quality and throughput goals. These workshops use C-MORE routinely because they can quickly and easily develop, analyze, and comprehend models. Using C-MORE, they can evaluate hundreds of line designs for each area of a plant, whereas in the past they considered fewer than 10 designs because of limited data and analysis capability. Also, GM now has good historical data on which to base models. Although the impact of line designs on profit is more difficult to measure than the impact of plant improvements, GM users estimate that line designs that meet launch performance goals have two to three (sometimes up to 10) times the profit impact of throughput improvements made after launch. This impact is largely because of better market timing for product launches and less unplanned investment during launch to achieve throughput targets. By using C-MORE, GM has obtained such savings in several recent product launches, achieving record ramp-up times for the two newest engine programs in GM's Powertrain Manufacturing organization.

Plant Operations and Culture

GM's establishment of throughput analysis has fostered important changes in its manufacturing plants. First, they use throughput sensitivity analysis and optimization to explore, assess, and promote continuous-improvement initiatives. They can use it to resolve questions (and avoid speculation) about buffer sizes and carrier counts.

Throughput analysis forces plants to consider all stations of a line as integral components of a single system. This perspective, together with the cross-functional participation in TIP meetings, reinforces a focus on system-level performance. With many local improvement opportunities competing for limited resources (for example, engineers, maintenance, and financial support), C-MORE and TIP help the plants concentrate their efforts on those initiatives that improve bottleneck performance and have the greatest impact on overall system throughput. This change greatly reduces the frustration of working to get project resources, only to find the improvements did little to improve throughput; with throughput analysis, such efforts find quick support and provide satisfying results.

Tailored throughput reports improve scheduling of reactive and preventive maintenance by ranking areas according to their importance to throughput. These reports also enhance plant suggestion programs; managers can better evaluate and reward suggestions for improving throughput, acting on the better-targeted suggestions and obtaining higher overall improvements. They thus motivate employees to seek and prevent problems rather than reacting only to the problem of the moment.

The plants have integrated throughput analysis in their organizations by designating throughput engineers to oversee data collection, to conduct throughput analyses, to help with problem solving, and to support TIP meetings. They have made this commitment even though they are under pressure to reduce indirect labor costs.

The C-MORE users group (active for several years) and the central staff that support it collect and communicate best practices for throughput analysis and data collection among the plants. They thus further support the continuous improvement of plant operations and capture lessons learned for use by many plants.

Organizational Learning

The organization has learned many lessons during its effort to institute throughput analysis. While some of these lessons may seem straightforward from a purely academic perspective—or to an operations research (OR) professional experienced in manufacturing-systems analysis—they were not apparent to the organization's managers and plant workers. Developing an understanding of these concepts at an organizational level was challenging and required paradigm shifts in many parts of the company; in some cases, the lessons contradicted active initiatives.

We conducted training sessions at all levels of the organization, from management to the plant floor. This training took many forms, from introductory games that illustrate basic concepts of variability and bottlenecks while demonstrating the need for data-driven analysis, to formal training courses in the tools and processes. Here are some lesson topics:
—Understanding what data is needed and (more important) what data is not needed.
—Understanding our data-collection capabilities and identifying the gaps.
—The importance of accurate and validated plant-floor data for analyzing operations improvements and for designing new systems.
—The importance of basing decisions on objective data and analysis, not on beliefs based on isolated experiences.
—The impact of variability on system performance and the importance of managing buffers to protect system performance.
—Reducing repair times to improve throughput and decrease variability.
—Understanding that extreme applications of the lean mantra (zero buffers) are harmful.
—Understanding that bottleneck operations aren't simply those with the longest downtimes or lowest stand-alone availability.
—The need for robust techniques to find true bottlenecks.
—Understanding that misuse of simulation analysis can foster poor decisions.
—The importance of consistency in modeling and in the transportability of models and data.
—The use of sensitivity analysis to understand key inputs and modeling assumptions that work well in practice.
—The impact of early part-quality assessment on cost and throughput.

The organizational changes made during the course of establishing throughput analysis included the formation of central groups to serve as "homerooms" for throughput knowledge and to transfer best practices and lessons learned among plants. Ultimately, GM's inclusion of throughput analysis in its standard work and validation requirements testifies to its success in learning these lessons.

Stimulation of Manufacturing-Related Operations Research Within GM

The success of throughput analysis in GM has exposed plant and corporate staffs to OR methods and analysis. General Motors University, GM's internal training organization, has had over 4,500 enrollments in its eight related courses on TOC, the C-MORE analysis tools, data collection, and TIP.
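Several of the lessons above, in particular the impact of variability, the protective role of buffers, and the harm of the zero-buffer extreme, can be made concrete with a textbook back-of-the-envelope comparison. The numbers are illustrative and the independence approximation is a standard simplification, not C-MORE's exact analysis.

```python
# Two stations in series, each rated 60 JPH with stand-alone
# availability (SAA) of 0.90.
S, A1, A2 = 60.0, 0.90, 0.90

# Zero buffer: every stop at either station stops both, so (assuming
# independent downtime) availabilities multiply.
zero_buffer = S * A1 * A2       # about 48.6 JPH

# Very large buffer: the stations are decoupled, and only the
# bottleneck's own losses remain.
large_buffer = S * min(A1, A2)  # about 54.0 JPH

print(round(zero_buffer, 1), round(large_buffer, 1))
```

A finite buffer yields a throughput between these two bounds, and quantifying exactly where is what the two-station analysis in the Appendix provides.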

Employees’ growing knowledge has greatly facil- motive industry. In addition, our work has fostered itated the support and acceptance of further an increased use of OR methods to support decision manufacturing-related OR work in GM. Since this making in many other business activities within GM. project began, GM has produced over 160 inter- From a broader perspective, our project represents nal research publications concerning improving plant a success story of the application of OR and high- performance. GM researchers have developed and lights the multifaceted approach often required to implemented analysis tools in related areas, including achieve such success in a large, complex organiza- maintenance operations, stamping operations, plant- tion. The impact of such projects goes beyond profit. capacity planning, vehicle-to-plant allocation, compo- At GM, this effort has advanced organizational pro- nent make-buy decisions, and plant layout decisions. cesses, employee education, plant culture, production These tools generally piggyback on C-MORE’s repu- systems research and related initiatives, and it has tation, data collection, and implementations to gain greatly increased the company’s exposure to OR. entry into plants and line-design workshops. Many of these tools actually use embedded throughput cal- Appendix culations from C-MORE through its callable library interface. The cumulative impact on profit of the addi- An Analytic Solution for the Two-Station Problem tional OR work enabled by our throughput-analysis The solution of the two-station problem can be used work is an indirect benefit of the C-MORE effort. as a building block to analyze longer lines using a decomposition approach that GM developed in the Concluding Remarks late 1980s and recently disclosed (Alden 2002). 
We first describe the solution of the two-station problem Improving GM’s production throughput has required with unequal speeds and then discuss the solution a sustained effort for over 15 years and has of longer lines. We begin with a job-flow approxima- yielded tangible results that affect its core business— tion of the two-station problem under the following manufacturing cars and trucks. Today, we continue assumptions and notation: this effort on a global scale in both plant opera- tions and line design with strong organizational sup- Assumption 1. Jobs flow though the system so that for port. The benefits include reduced costs and increased each fraction of a work cycle completed in processing a job, revenues. Our work has established standardized, the same fraction of the job moves though the station and enters its downstream buffer. automatic collection of data, fast and easy-to-use throughput-analysis tools, and a proven and sup- Assumption 2. The intermediate buffer, on its own, ported throughput-improvement process. Success in does not delay job flow, that is, it does not fail and has zero this work results from a focused early implementa- transit time. It has a finite and fixed maximum job capac- tion effort by a special task team followed by estab- ity of size B. (We increase buffer size by one to account for lishment of a central support staff, plant ownership the additional buffering provided by in-station job storage of throughput-analysis capability with designated in the representation of the job flow.) throughput engineers, strong management support Assumption 3. Station i’s i = 1 2 processing speed Si and recognition of successes, and an ongoing collab- is constant under ideal processing conditions, that is, when oration between Research and Development, central not failed, blocked, or starved. support staffs, and plant personnel. GM’s need for fast and accurate throughput- Assumption 4. 
While one station is down, the other analysis tools gave us exciting opportunities to apply station does not fail, but its speed drops to its normal and extend state-of-the-art results in analytic and sim- speed times its stand-alone availability to roughly account ulation approaches to throughput analysis and opti- for possible simultaneous downtime. We thus simplify the mization and to conduct research on many topics analysis with little loss of accuracy. related to the performance of production systems. Assumption 5. Station i’s processing time between GM recognizes that much of this work provides it failures (not the elapsed time between failures) is an expo- with an important competitive advantage in the auto- nentially distributed random variable with a fixed failure Alden et al.: General Motors Increases Its Production Throughput Interfaces 36(1), pp. 6–25, ©2006 INFORMS 21

rate λi. This means a station does not fail while idle, that is, when it is off, failed, under repair, blocked, or starved.

Assumption 6. Station i's repair time is an exponentially distributed random variable with a fixed repair rate μi.

Assumption 7. The first station is never starved, and the second station is never blocked.

Assumption 8. All parameters (station speed, buffer size, failure rates, and repair rates) are positive, independent, and generally unequal.

Assumption 9. The system is in steady state; that is, the probability distribution of the system state at a randomly selected time is independent of the initial conditions and of the time.

For now, we also assume that the processing speed of Station 1 is faster than that of Station 2, that is, S1 > S2. While both stations are operational, the finite-capacity buffer will eventually fill because S1 > S2. While the buffer is full, Station 1 will be idle at the end of each cycle until Station 2 finishes its job and loads its next job from the buffer. This phenomenon is called speed blocking of Station 1. (A similar speed starving of Station 2 occurs when S1 < S2.)

At a station repair, the system is completely described by the buffer contents (ranging from 0 to B), and the associated state probability distribution is given by

P0 = probability the buffer content is zero (empty) when a station is repaired,
PB = probability the buffer content is B (full) when a station is repaired, and
p(x) = probability density that the buffer content is x for 0 ≤ x ≤ B when a station is repaired, excluding the finite probabilities (impulses) at x = 0 and x = B.

Let p̂(x) denote the entire distribution given by (P0, p(x), PB), so that p̂(x) is the mixed probability distribution composed of p(x) plus the impulses of areas P0 and PB located at x = 0 and x = B, respectively. In a similar manner, let q̂(x), or (q(x), Q0, QB), be the full probability state distribution at a station failure (note that for S1 > S2, Q0 = 0).

Under our assumptions, the system has convenient renewal times at station repairs, that is, times when the probability distributions of the system's state are identical. We break the evolution of the distribution of buffer contents between subsequent times of station repairs into parts: (a) evolution from station repair to a station failure, and (b) evolution from station failure to its repair (Figure 6).

For S1 > S2, there are five general cases of buffer-content evolution from station repair to subsequent failure (Figure 7) and seven general cases of buffer-content evolution from station failure to subsequent repair (Figure 8). Assumption 4 greatly simplifies this case analysis by reducing the infinite number of possible station failure-and-repair sequences before both stations are operational to a manageable finite number.

By considering all five cases of buffer-content evolution from repair to failure, we can express q̂(x) in terms of p̂(x). By considering all seven cases of buffer-content evolution from failure to repair and the renewal property of station repairs, we can derive

p(x) = a2 P0 e^(θ2 x) + a3 P0 e^(θ3 x),

where θ2 and θ3 are the two roots

θ2 = −(r − r1 + r2)/2 + sqrt( ((r − r1 + r2)/2)^2 + (λ2/(λ1 + λ2)) r (r1 + r2) ) and
θ3 = −(r − r1 + r2)/2 − sqrt( ((r − r1 + r2)/2)^2 + (λ2/(λ1 + λ2)) r (r1 + r2) ).

Here r, r1, and r2 are the failure and repair rates normalized by the relevant buffer fill and drain speeds (r by S1 − S2, r1 by a term in S2, and r2 by a term in S1), and the coefficients a2 and a3 are closed-form functions of θ2, θ3, r1, and r2, with denominators θ2 − θ3 and θ3 − θ2, respectively; Alden (2002) gives the complete expressions.
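These assumptions are easy to exercise numerically. The following discrete-time fluid simulation is our illustrative sketch, not the paper's analytic solution, and all parameter values in it are made up; it implements operation-dependent failures, exponential repairs, speed blocking at a full buffer, and the Assumption 4 slowdown, and estimates throughput by brute force:

```python
import random

def simulate_two_station(S1, lam1, mu1, S2, lam2, mu2, B,
                         horizon=5000.0, dt=0.01, seed=1):
    """Discrete-time fluid sketch of the two-station model:
    operation-dependent exponential failures (rate lam_i per unit of
    processing time), exponential repairs (rate mu_i), a finite buffer
    of size B, speed blocking at a full buffer, and the Assumption 4
    slowdown (the surviving station runs at speed times its stand-alone
    availability and cannot fail while the other station is down)."""
    rng = random.Random(seed)
    a1 = mu1 / (mu1 + lam1)   # stand-alone availabilities
    a2 = mu2 / (mu2 + lam2)
    x = 0.0                   # buffer contents, 0 <= x <= B
    up1 = up2 = True
    produced = 0.0            # cumulative output of Station 2
    for _ in range(int(horizon / dt)):
        if up1 and up2:
            rate1 = S1 if x < B else S2                  # speed blocking
            rate2 = S2 if (x > 0.0 or rate1 >= S2) else rate1
            # failure hazard scales with the fraction of the interval
            # actually spent processing (operation-dependent failures);
            # at most one station fails per step, so both are never down
            if rng.random() < lam1 * (rate1 / S1) * dt:
                up1 = False
            elif rng.random() < lam2 * (rate2 / S2) * dt:
                up2 = False
        elif up1:             # Station 2 is down
            rate1 = S1 * a1 if x < B else 0.0            # Assumption 4
            rate2 = 0.0
            if rng.random() < mu2 * dt:
                up2 = True
        else:                 # Station 1 is down
            rate1 = 0.0
            rate2 = S2 * a2 if x > 0.0 else 0.0
            if rng.random() < mu1 * dt:
                up1 = True
        x = min(B, max(0.0, x + (rate1 - rate2) * dt))
        produced += rate2 * dt
    return produced / horizon

# Illustrative parameters only (not values from the paper).
tp = simulate_two_station(S1=1.2, lam1=0.01, mu1=0.1,
                          S2=1.0, lam2=0.01, mu2=0.1, B=10.0)
```

A brute-force estimate of this kind is useful mainly as a sanity check on the closed-form results.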


Figure 6: The probability distribution of buffer contents at (any) station repair, p̂(x), evolves to become the probability distribution of buffer contents at the subsequent failure of any station, q̂(x), and continues to evolve into the same distribution of buffer contents p̂(x) when the station is repaired, completing a renewal. In this particular example, Station 1 was repaired, then failed, and subsequently was repaired again.

Given p(x), we can derive companion closed-form equations for the impulse probabilities: PB is a closed-form multiple of P0 whose coefficients involve the terms e^(θ2 B) and e^(θ3 B), and P0 then also follows in closed form. Given p̂(x), we can derive closed-form expressions for the probabilities of the various system states, the throughput rate, work-in-buffer (WIB), work-in-process (WIP), and system time (Alden 2002).
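Once the mixed distribution p̂(x) = (P0, p(x), PB) is in hand, performance measures such as the expected work-in-buffer follow by integrating against it. A minimal numerical sketch follows; the distribution used below is made up for illustration and is not the model's p̂(x):

```python
def expected_wib(P0, PB, density, B, n=10000):
    """E[WIB] for a mixed distribution with impulses P0 at x = 0 and
    PB at x = B plus a density on (0, B), via the midpoint rule."""
    h = B / n
    mass = P0 + PB
    mean = B * PB          # the impulse at x = 0 contributes nothing
    for k in range(n):
        x = (k + 0.5) * h
        mass += density(x) * h
        mean += x * density(x) * h
    assert abs(mass - 1.0) < 1e-6, "mixed distribution must sum to 1"
    return mean

# Illustration only: a uniform density between boundary impulses.
B = 5.0
P0, PB = 0.1, 0.3
c = (1.0 - P0 - PB) / B
ewib = expected_wib(P0, PB, lambda x: c, B)   # exact value: 3.0
```

The same quadrature applies unchanged when the density is the two-exponential form derived above.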


Figure 7: For S1 > S2, there are five cases of buffer-content evolution from a station repair to a subsequent failure of any station.


Figure 8: For S1 > S2, there are seven cases of buffer-content evolution from a station failure to its subsequent repair.

We can easily derive results for the case S1 < S2 from the results for the case S1 > S2 as follows:

(1) Swap S1, λ1, and μ1 for S2, λ2, and μ2, respectively.
(2) Calculate the results using the equations for the case S1 > S2.
(3) Swap buffer state probabilities as follows: blocking with starving, full with empty, and filling with emptying.
(4) Swap the expected buffer contents of the flow model, E[WIB], with B − E[WIB].

We can use this analysis approach (or L'Hospital's rule) to derive results for the case of equal speeds, S1 = S2. Also, a search for possible zero divides reveals one case, when θ2 = 0, for which we can use L'Hospital's rule to derive useful results (Alden 2002).

We also developed a method of using the two-station analysis as a building block to analyze longer lines: we analyzed each buffer in an iterative fashion while summarizing all stations upstream and all stations downstream of the buffer as two separate aggregate stations; this allowed us to use the results of the two-station analysis. We derived aggregate station parameters that satisfied conservation of flow, the expected aggregate station-repair rate, and the expected aggregate station-failure rate. These expected rates considered the actual station within an aggregate station two stations away from the same buffer. We calculated the resulting equations (in aggregate parameters) while iteratively passing through all the buffers until we satisfied a convergence criterion. Convergence is not guaranteed and occasionally is not achieved, even with a small step size. However, it is very rare that convergence is too poor to provide useful results. Gershwin (1987) describes this general approach, called decomposition, to derive expressions for aggregate station parameters (repair rate, failure rate, and speed), which we extended to handle stations with unequal speeds. Due to the similarity of approaches, we omit the details of our particular solution.

Activity-Network Representation of Job Flow

An activity network is a graphical representation of events or activities (represented as nodes) and their precedence relationships (represented with directed arcs). Muth (1979) and Dallery and Towsley (1991) used a network representation of job flow through a production line to prove the reversibility and symmetry properties of production lines. These networks are sometimes referred to as sample path diagrams.

To analyze serial production lines, the C-MORE network simulation uses an activity network with a

simple repeatable structure, where each node represents a station processing a job (Figure 9). Conceptually, the nodes in the network are arranged in a grid, so that each row of nodes corresponds to the sequence of jobs processed by a specific workstation and each column of nodes corresponds to a specific job as it progresses through the series of workstations. The arcs in the activity network represent the time-dependent precedence of events. Within a given row, an arc between nodes in adjacent columns represents the time dependency of a specific workstation's availability to process a new job upon that workstation's earlier completion of the previous job. Within a given column, an arc between nodes in adjacent rows represents the time dependency of a specific job's availability to be processed by a new workstation upon that job's completion at the upstream workstation. A diagonal arc between nodes in adjacent rows and different columns represents the dependency of an upstream station's ability to release a job upon downstream blocking, where the number of columns spanned by the arc indicates the number of intermediate buffer spaces. We use the time dependencies represented by these arcs to analyze the dynamic flow of jobs through the production line.

Figure 9: In the activity-network representation (top) of a closed-loop serial production line (bottom), each node represents a job being processed by a workstation. Each row of nodes corresponds to the sequence of jobs processed by a specific station, and each column of nodes represents a single job (moved by a specific carrier) as it progresses through all five stations. The arcs in the network indicate precedence relationships between adjacent nodes (for clarity, we have omitted some arcs from the diagram). Solid arcs represent the availability of jobs (vertical arcs) and the availability of workstations (horizontal arcs). Dashed arcs represent time dependency due to potential blocking between neighboring workstations, where the number of columns spanned by an arc indicates the number of intermediate buffer spaces between the workstations. We can calculate the time when a job enters a workstation as the maximum time among nodes with arcs leading into the specific node corresponding to that workstation. The update order for information in the activity network depends on the initial position of job carriers (dashed circles).

For C-MORE, the activity-network representation provides the conceptual foundation for a very efficient simulation that avoids using global time-sorted event queues and other overhead functions found in traditional discrete-event simulation.
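The precedence arcs translate directly into a longest-path recursion over the grid of nodes. The sketch below is our own illustration of that recursion (not GM's C-MORE code): it computes departure times for a saturated serial line under blocking-after-service, where the downstream dependency spans buffers[i] + 1 job columns, mirroring the dashed arcs of Figure 9:

```python
def departure_times(service, buffers):
    """service[i][j]: service time of job j at station i (jobs are
    assumed always available at station 0); buffers[i]: buffer spaces
    between stations i and i + 1.  Returns D with D[i][j] = time at
    which job j leaves station i."""
    m, n = len(service), len(service[0])
    D = [[0.0] * n for _ in range(m)]
    for j in range(n):                        # jobs in sequence
        for i in range(m):                    # stations front to back
            start = max(D[i][j - 1] if j > 0 else 0.0,   # station free
                        D[i - 1][j] if i > 0 else 0.0)   # job arrived
            done = start + service[i][j]
            k = j - buffers[i] - 1 if i < m - 1 else -1
            if i < m - 1 and k >= 0:
                done = max(done, D[i + 1][k])  # blocked until space opens
            D[i][j] = done
    return D

# Two stations, unit service times, one buffer space: the line settles
# into one departure per time unit.
D = departure_times([[1.0] * 5, [1.0] * 5], [1])
```

Because each node depends only on earlier jobs and adjacent stations, the recursion needs no global time-sorted event queue, which is what lets a network simulation avoid that overhead.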

Although we can view the hybrid simulation algorithm in C-MORE as a generalization of the network-simulation method, in our implementation, we use very different representations of the internal data structures for the activity network, which improve computational performance when combined with a global event queue for managing the time-synchronized events.

References

Alden, J. M. 2002. Estimating performance of two workstations in series with downtime and unequal speeds. GM technical report R&D-9434, Warren, MI.
Altiok, T. 1997. Performance Analysis of Manufacturing Systems. Springer, New York.
Buzacott, J. A., J. G. Shanthikumar. 1993. Stochastic Models of Manufacturing Systems. Prentice Hall, Englewood Cliffs, NJ.
Chen, J. C., L. Chen. 1990. A fast simulation approach for tandem queueing systems. O. Balci, R. P. Sadowski, R. E. Nance, eds. Proc. 22nd Conf. Winter Simulation. IEEE Press, Piscataway, NJ, 539–546.
Dallery, Y., S. B. Gershwin. 1992. Manufacturing flow line systems: A review of models and analytical results. Queueing Systems 12(1–2) 3–94.
Dallery, Y., D. Towsley. 1991. Symmetry property of the throughput in closed tandem queueing networks with finite buffers. Oper. Res. Lett. 10(9) 541–547.
General Motors Corporation. 1992. 1991 Annual Report. Detroit, MI.
General Motors Corporation. 2005a. 2004 Annual Report. Detroit, MI.
General Motors Corporation. 2005b. Company profile. Retrieved February 25, 2005 from http://www.gm.com/company/corp_info/profiles/.
Gershwin, S. B. 1987. An efficient decomposition method for the approximate evaluation of tandem queues with finite storage space and blocking. Oper. Res. 35(2) 291–305.
Gershwin, S. B. 1994. Manufacturing Systems Engineering. Prentice Hall, Englewood Cliffs, NJ.
Goldratt, E. M., J. Cox. 1986. The Goal: A Process of Ongoing Improvement. North River Press, Croton-on-Hudson, NY.
Govil, M. C., M. C. Fu. 1999. Queueing theory in manufacturing: A survey. J. Manufacturing Systems 18(3) 214–240.
Harbour and Associates, Inc. 1992. The Harbour Report: Competitive Assessment of the North American Automotive Industry 1989–1992. Troy, MI.
Harbour and Associates, Inc. 1999. The Harbour Report 1999—North America. Troy, MI.
Harbour and Associates, Inc. 2000. The Harbour Report North America 2000. Troy, MI.
Harbour and Associates, Inc. 2001. The Harbour Report North America 2001. Troy, MI.
Harbour and Associates, Inc. 2002. The Harbour Report North America 2002. Troy, MI.
Harbour and Associates, Inc. 2003. The Harbour Report North America 2003. Troy, MI.
Harbour Consulting. 2004. The Harbour Report North America 2004. Troy, MI.
Li, J., D. E. Blumenfeld, J. M. Alden. 2003. Throughput analysis of serial production lines with two machines: Summary and comparisons. GM technical report R&D-9532, Warren, MI.
Muth, E. J. 1979. The reversibility property of production lines. Management Sci. 25(2) 152–158.
Papadopoulos, H. T., C. Heavey. 1996. Queueing theory in manufacturing systems analysis and design: A classification of models for production and transfer lines. Eur. J. Oper. Res. 92(1) 1–27.
Viswanadham, N., Y. Narahari. 1992. Performance Modeling of Automated Manufacturing Systems. Prentice Hall, Englewood Cliffs, NJ.