Data Driven Manufacturing Risk Assessment for Turbine Engine Programs

by Sagar P. Yadama

Submitted to the Department of Mechanical Engineering and MIT Sloan School of Management in partial fulfillment of the requirements for the degree of Master of Science in Mechanical Engineering at the MASSACHUSETTS INSTITUTE OF TECHNOLOGY

May 2020

© Sagar Pandey Yadama 2020. All rights reserved.

The author hereby grants MIT permission to reproduce and distribute publicly paper and electronic copies of this thesis document in whole or in part in any medium now known or hereafter created.

Author: Department of Mechanical Engineering and MIT Sloan School of Management, May 8, 2020

Certified by: David Hardt, Professor, Mechanical Engineering, Thesis Supervisor

Certified by: Roy E. Welsch, Professor, Operations Research, Thesis Supervisor

Accepted by: Maura Herson, Assistant Dean, MBA Program, Sloan School of Management

Accepted by: Nicolas Hadjiconstantinou, Chairman, Mechanical Engineering Graduate Committee

Data Driven Manufacturing Risk Assessment for Turbine Engine Programs

by Sagar P. Yadama

Submitted to the Department of Mechanical Engineering and MIT Sloan School of Management on May 8, 2020, in partial fulfillment of the requirements for the degree of Master of Science in Mechanical Engineering

Abstract

Waste and uncertainty can be managed at all levels of a production process, from the program level down to a single step in a manufacturing cell, by assessing risk. Risk is defined as uncertainty in the ability to deliver the final product of a manufacturing process. The standard method for evaluating risk in aerospace is Manufacturing Readiness Level (MRL). However, these methods were developed for previous generations of turbine engines and do not represent the capabilities of a modern manufacturing environment. The MRL process for managing overall manufacturing efficiency of an engine program is highly qualitative and based on leveraging industry knowledge. The process requires experienced team members to implement, is highly time intensive, and is disconnected from the quantitative metrics that drive performance in the rest of the organization. In an effort to revamp the manufacturing risk assessment process for new turbine engine programs, Pratt & Whitney seeks to develop a standardized data driven risk assessment process to improve the accuracy and accessibility of risk management.

This thesis develops a data driven risk assessment process to provide quantitative validation for the legacy risk evaluation method. Cost, quality, and delivery operations metrics are collected and analyzed to build a comprehensive measure of risk for each part in a turbine engine program that can be aggregated to determine total risk for the entire program. In three phases, this project addresses three key challenges faced by the risk management team while also building a comprehensive risk analysis process accessible to anyone in the production hierarchy. Phase one addresses automated identification of operations critical parts in an engine program, resulting in complexity reduction of over 80%. Phase two focuses on development of an automated risk visualization dashboard that collects critical operations metrics into a central source and computes a comprehensive risk value for each part in the engine program. Phase three is the construction of standard risk mitigation management tools to transform output of the risk dashboard into useful information for individual manufacturing teams. Ultimately, this research shows that development of standard processes and tools for identifying, analyzing, and mitigating production risk using real time operations data significantly enhances the ability of management and manufacturing teams to understand and mitigate uncertainty in final engine delivery.

Thesis Supervisor: David Hardt Title: Professor, Mechanical Engineering

Thesis Supervisor: Roy E. Welsch Title: Professor, Operations Research

Acknowledgments

This thesis was developed from a six-month research fellowship done at Pratt & Whitney in East Hartford, CT. The support of the Operations organization and Manufacturing Readiness Level team made this experience highly productive and enjoyable. I truly appreciate the opportunity to apply teachings from LGO in such a dynamic environment with a dedicated team of talented engineers. I want to thank the sponsor of this project and VP of Operational Excellence, Dave Parry, for constructing and championing this opportunity. To Praba Baptist for providing high level guidance and executive support to this project. To my supervisor, Jason Cote, for his diligent mentorship and for working with me in any capacity to make sure this project was a success. To the MRL team: Daisy Susaya, Sam Lagana, Michele Burnat, Keith Macht, Melissa Malasmas, and others for working closely with me to develop the outputs of this project. To Sean Fitzgerald and his team for providing excellent data analytics support and advice. To LaWanda Scott and others in the cost organization for meeting with me week after week to provide data and guidance. To Travis Gacewski and Kevin Thomas for their help in making the most of the internship process within Pratt & Whitney as well as their candid mentorship regarding life after LGO. To my thesis advisors, David Hardt and Roy Welsch, for their time meeting, visiting, and advising me in both thesis and life matters. To Thomas Roemer, Ted Equi, and the rest of the LGO staff for their unyielding support and help to ensure my success in this MS/MBA program. I am grateful for this opportunity to learn and apply operations analytics techniques in a rigorous manufacturing environment. To my fellow classmates in the LGO class of 2020 for their friendship and making the last two years some of the best years of my life. To my parents, Gautam Yadama and Shanta Pandey, and my sister, Aishwarya Yadama, for their consistent encouragement, guidance, and advice. Finally, I would like to thank my girlfriend, Aditi Ramachandran, for her unwavering support and sacrifice to help me get to this point.

Contents

1 Introduction
  1.1 Problem Statement and Motivation
  1.2 Hypothesis
  1.3 Research Methodology
  1.4 Scope and Limitations
  1.5 Thesis Outline

2 Company Overview
  2.1 Company Background
  2.2 Three Lens Analysis of the Operations Organization
    2.2.1 Strategic Design Lens
    2.2.2 Cultural Lens
    2.2.3 Political Lens

3 Literature Review
  3.1 Lean Manufacturing
  3.2 Readiness
    3.2.1 Technology Readiness Level
    3.2.2 Manufacturing Readiness Level
  3.3 Manufacturing Risk Assessment
  3.4 Learning Curve
    3.4.1 Calculating the Learning Curve
    3.4.2 Use of the Learning Curve
  3.5 Summary

4 Project Context
  4.1 Timeline of Quantitative Risk Assessment Implementation
  4.2 MRL Team
    4.2.1 Responsibilities and Value Proposition
    4.2.2 Current Major Efforts
    4.2.3 Engagement with this Project
  4.3 Key Hurdles in Qualitative Risk Assessment
    4.3.1 Hurdle 1: Which Parts to Assess?
    4.3.2 Hurdle 2: Data Collection and Comparison
    4.3.3 Hurdle 3: Risk Analysis and Implementation
  4.4 Integration with IT Systems

5 Data Analysis and Methods
  5.1 Phase 1: Operations Critical Bill of Materials
    5.1.1 BOM Data Sources
    5.1.2 Development of Ops-BOM Rules
    5.1.3 Automated Ops-BOM Tool
  5.2 Phase 2: Quantitative Risk Analysis
    5.2.1 Key Operations Metrics
    5.2.2 Connecting Data Sources
    5.2.3 Calculating Risk Impact
  5.3 Phase 3: Risk Mitigation

6 Results and Discussion
  6.1 Operations BOM
  6.2 Risk Assessment Dashboard
  6.3 Risk Mitigation Tool
  6.4 Future Work

7 Conclusions

List of Figures

1-1 Root cause analysis for unreported risk on GTF program. Issues with manufacturing during first wave of GTF engines were documented and traced down to determine root cause in Operations organization. The colored boxes document common causes that contributed to program level setbacks. This thesis addresses lack of equal standard work quality across all processes, separation from performance metrics, and lack of a program level risk assessment.

3-1 Methodology for assessing leanness in the manufacturing process. Lean manufacturing sub-groups are analyzed individually using quantitative performance metrics tailored to each sub-group. A leanness level is determined for each sub-group representing the amount of leanness contributed by that group. Leanness of each sub-group is aggregated to derive an overall leanness level of the whole process.

3-2 General risk management process vs. Integrative risk management framework. Legacy risk management process shown on the left utilizes industry expertise in a linear process of identifying and assessing risk. Integrative framework is data driven and iterative. Output data feeds back into risk assessment model to drive continuous improvement.

3-3 Log-linear learning curves at various learning rates. At larger rates of learning, the risk decreases more for each unit produced. The industry standard learning rate for aerospace is 85%, meaning risk should fall by 15% with each doubling of cumulative production.

4-1 Issues and actions identified during the MRL reset effort. Four main areas of improvement were identified for the MRL team: standardization of risk assessment tools, integration of MRL into existing processes, development of MRL training, and enforcement of MRL requirements. This thesis addressed standardization of tools and integration of MRL with quantitative metrics.

4-2 Progression and advancement of generations of manufacturing assessment methods at Pratt & Whitney. Risk assessment methodology has historically evolved according to changing needs and capabilities at Pratt & Whitney. The Third Generation process developed by this thesis leverages new data collection and analysis capabilities to make risk assessments more quantitative and accessible.

5-1 Program K decision tree for generation of Ops-BOM. The manually sorted training set identified 24% of the EBOM parts as critical for risk assessment. The automated attribute-based sorting method described by this tree resulted in capturing 31% of the EBOM. Two key attributes were identified: Item-Type of Next Higher Assembly and Part Type.

5-2 Program F decision tree for generation of Ops-BOM. The manually sorted training set identified 18% of the EBOM parts as critical for risk assessment. The automated attribute-based sorting method described by this tree resulted in capturing 19% of the EBOM. Three key attributes were identified: Item-Type of Next Higher Assembly, Item-Type of Subject Part, and Part-Type.

5-3 Information flow diagram of current system and the data driven risk analysis process. Cost, quality, and delivery metrics are currently considered separately, and not integrated into MRL assessment process. Metrics are described in different units and hard to compare, resulting in high variability risk assessment results. The proposed solution is to integrate collection and analysis of operations metrics with each other and the MRL assessment to drive consistency and quantitative validation in the risk assessment process.

5-4 Inverse relationship between risk and OTD with delivery as a normalization variable. As OTD approaches 0, Risk approaches infinity. Simple normalization of OTD yields extreme results at boundary values.

6-1 Automated tool interface for generating Ops-BOM. EBOM hierarchy is represented using collapsible and expandable rows. An assignment button automatically generates the right-most column indicating which parts have been selected for the Ops-BOM. "1" indicates selection while "0" indicates omission from the Ops-BOM.

6-2 Tabular output of Ops-BOM tool. All Ops-BOM selected parts are collated into a single display. EBOM unique IDs are carried over in the left-most column to maintain ability to drill down into sub-assemblies and details.

6-3 Progression of reduction of BOM after implementation of each rule in Program K. After implementing all four rules, the resulting Ops-BOM is 84% smaller than the original EBOM.

6-4 Progression of reduction of BOM after implementation of each rule in Program F. After implementing all four rules, the resulting Ops-BOM is 88% smaller than the original EBOM.

6-5 Output of Risk Assessment Dashboard. Ops-BOM parts are displayed alongside cost, quality, and delivery metrics. These metrics are used to calculate incurred risk displayed in the right-most column. Ops-BOM parts can be expanded to drill down into sub-assembly data.

6-6 Program K Pareto chart of risk contribution for 20 riskiest parts. The majority of risk in the program is incurred by the top part. This accurately reflects historical performance, and exposes high risk that was previously underrepresented in the legacy risk assessment process.

6-7 Program F Pareto chart of risk contribution for 20 riskiest parts. Similar to Program K, this Pareto chart shows majority of risk incurred by the highest risk part. The high quantitative risk value indicates immediate action necessary to mitigate issues with final engine delivery.

6-8 Progress of fully automated risk dashboard. Final risk dashboard is integrated with all data sources allowing for real-time data updates. Connection to part attribute data adds ability to create and modify groups of parts. Shown in red is a Risk Pareto of a custom created group of parts. Shown in green is a Risk Pareto of the full program Ops-BOM.

6-9 User interface for generation of the theoretical learning curve. Variable learning curve parameters are displayed in tan colored cells. Parameters are laid out in a context aligned to performance goals of manufacturing teams for ease of use.

6-10 Action plan input including cost impact, timeline, and unit incorporated. This sample action plan shows interface that manufacturing management can use to plan and track improvement efforts on high risk parts. Ownership, timeline, and unit incorporation all provide visibility and accountability in implementation of the risk mitigation effort.

6-11 Final risk mitigation curve output with all four benchmarks represented. Comparison of the learning curve, program targets, action plan, and actual data gives a comprehensive understanding of risk mitigation progress. Action plan and actuals can be benchmarked against risk reduction due to standard learning, and targets can be continuously re-evaluated based on current state.

7-1 Information flow diagram depicting inputs and outputs of Quantitative Risk Assessment process. Shows movement of data from independent sources to Ops-BOM and final dashboard as well as final validation interaction with current MRL process.

List of Tables

3.1 DoDI specifications for all 10 manufacturing readiness levels.

Chapter 1

Introduction

This thesis seeks to make manufacturing risk analysis more efficient, accurate, and accessible by leveraging recent advances in real time data collection and automation. In this context, risk is defined as uncertainty in the ability of a manufacturing process or program to deliver the final product. Current risk assessments rely heavily on industry experience and exhaustive qualitative checklists. With the advent of operations data collection infrastructure in the form of SAP and Business Warehouse among others, this project explores how to incorporate quantitative validation into the risk assessment process. Available data sources are evaluated, identifying key metrics used to develop a semi-automated process by which anyone in the production hierarchy can gain an understanding of risk prioritization at a part level for turbine engine programs. This first chapter focuses on presenting the problem statement, hypothesis, research methodology, scope, and outline of the rest of this document.

1.1 Problem Statement and Motivation

Due to advancements in turbine engine technology, Pratt & Whitney is experiencing an increase in demand for new engines. In the past, Pratt & Whitney operated in a high margin industry with steady or contracting demand. With the advent of innovative turbine technologies, including the Geared Turbofan (GTF), and a surge in demand, Pratt & Whitney is evaluating how to maximize production capacity and reduce systematic risk.

Initial production of the GTF faced unforeseen challenges and risks in the manufacturing processes. Integral modules of the GTF reported high rates of quality issues, resulting in severely reduced engine throughput and decreased capacity to meet growing demand. The aerospace industry standard risk assessment system is Manufacturing Readiness Level (MRL). MRL was championed by the Department of Defense (DoD) and subsequently adopted by aerospace manufacturers themselves. The MRL approach is driven by assessments, which must be carefully controlled to provide useful output [14]. The legacy MRL system at Pratt & Whitney was unable to adequately identify and mitigate risk during production of the GTF. The Operations Organization conducted a root cause analysis, demonstrated in Figure 1-1, to identify potential failures in the legacy MRL process.

Figure 1-1: Root cause analysis for unreported risk on GTF program. Issues with manufacturing during first wave of GTF engines were documented and traced down to determine root cause in Operations organization. The colored boxes document common causes that contributed to program level setbacks. This thesis addresses lack of equal standard work quality across all processes, separation from performance metrics, and lack of a program level risk assessment.

The MRL team was able to identify several root cause issues that led to suboptimal manufacturing processes passing through readiness milestones during risk assessments. Three main root causes will be addressed by the research in this project: inadequate standardization of processes, segregation of risk assessment from quantitative performance metrics, and inability to aggregate risk at a program level. Pratt & Whitney is investing in a redefinition of the MRL and manufacturing risk assessment processes to more accurately and quickly identify high risk areas in future turbine programs.

1.2 Hypothesis

This thesis hypothesizes that standard processes and tools for identifying, analyzing, and mitigating production risk using real time operations data significantly enhance the ability of management and manufacturing teams to understand and react to operations inefficiencies. This project is focused on reducing risk assessment cycle time, complexity, and inaccuracy, as well as demonstrating how teams throughout the production hierarchy can incorporate risk analysis to drive decision making.

1.3 Research Methodology

Research for this thesis came from several different sources. Interviews with members of the operations organization shed light on customer needs throughout the production hierarchy. Cost, quality, and delivery operations data were collected and analyzed to provide insight into current risk management processes and to implement a quantitative risk model into the final dashboard. This risk analysis dashboard was supplemented by risk mitigation tools to manage risk reduction efforts, provide real time performance, and benchmark against the calculated optimal learning rate. The rest of this thesis will evaluate the process of implementing the quantitative risk analysis and mitigation process and discuss the results.

1.4 Scope and Limitations

Part of Pratt & Whitney's effort to strengthen the MRL process is administrative and out of the scope of this project. Figure 1-1 shows two such major issues: maintaining MRL process rigor, and integrating MRL into the engineering design process. This project will focus specifically on integration of operations metrics into the risk assessment process and development of standard tools for risk analysis and mitigation. The key operations metric fields used to test the hypothesis were cost, quality, and delivery. Upon validation of the hypothesis, Pratt & Whitney intends to expand the processes developed to encompass all engine programs and incorporate additional metrics as more operations data becomes available. The risk assessment processes can be broken down into three distinct phases:

∙ Identification of Operations Critical Bill of Materials – an automated process by which each program can develop a comprehensive list of parts critical to understanding aggregated production risk of the entire engine.

∙ Production Risk Analysis – the collection of key operations metrics into a data dashboard and a method of combining these metrics into a single value representing the production risk of each operations critical part in an engine (a toy sketch of such a roll-up follows this list). This phase also includes analysis and visualization of risk at a part level to aid in resource prioritization and enhancing understanding of program level risk exposure.

∙ Risk Mitigation Implementation – standard tools and processes for implementing risk mitigation efforts once parts contributing critically high production risk to the engine program are identified.
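As a toy illustration of the second phase, the sketch below combines cost, quality, and delivery metrics into a single nonnegative risk value per part and sums part values into a program total. The field names, normalization, and unweighted sum are assumptions for illustration only; the actual risk model is developed in Chapter 5.

```python
"""Toy roll-up of cost, quality, and delivery into one part-level risk
value. Field names, normalization, and the unweighted sum are assumed
for illustration; the actual model is developed in Chapter 5."""

from dataclasses import dataclass

@dataclass
class PartMetrics:
    part_id: str
    cost_ratio: float  # actual / target cost; 1.15 means 15% overrun
    scrap_rate: float  # fraction of units scrapped, 0.0-1.0
    otd: float         # on-time delivery fraction, 0.0-1.0

def part_risk(m: PartMetrics, otd_floor: float = 0.05) -> float:
    """Nonnegative risk; delivery enters inversely, floored so the value
    stays finite as OTD approaches 0 (the boundary issue of Figure 5-4)."""
    cost_term = max(m.cost_ratio - 1.0, 0.0)           # only overruns add risk
    quality_term = m.scrap_rate
    delivery_term = 1.0 / max(m.otd, otd_floor) - 1.0  # 0 at 100% OTD
    return cost_term + quality_term + delivery_term

parts = [
    PartMetrics("hypothetical rotor", 1.25, 0.10, 0.60),
    PartMetrics("hypothetical case", 1.05, 0.02, 0.95),
]
for p in sorted(parts, key=part_risk, reverse=True):  # Pareto-style ranking
    print(f"{p.part_id}: risk = {part_risk(p):.2f}")
print(f"program risk = {sum(part_risk(p) for p in parts):.2f}")
```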

1.5 Thesis Outline

Chapter 2 provides an overview of Pratt & Whitney. Specifically, this chapter examines the Operations Organization and the MRL team from a political, cultural, and strategic design lens.

Chapter 3 provides an overview of literature regarding current research in manufacturing risk assessments from a variety of industries, and the use of different operations metrics to generate a singular risk value.

Chapter 4 discusses the context of this project including administrative preparation from Pratt & Whitney, the MRL team's current objective within the Operations Organization, and this project's role in furthering that objective.

Chapter 5 lays out the three main phases of this project. This chapter focuses on data collection, and methods of analyzing data to overcome each of the three hurdles targeted by each phase.

Chapter 6 provides a discussion of the results of all three phases, and an evaluation of the overall risk assessment process developed over the course of this thesis.

Chapter 7 outlines future recommendations for continuing this project and discusses lessons learned during development of the final quantitative, semi-automated, risk assessment process.

Chapter 2

Company Overview

This chapter gives background information from Pratt & Whitney motivating the objectives of the Operations Organization, the MRL team, and this thesis. It discusses how the introduction of new technologies such as the GTF program line, and the resulting increase in demand, played a significant role in the current direction of Pratt & Whitney Operations. Moreover, this chapter conducts a three-lens analysis of the Pratt & Whitney Operations Organization focusing on the role of the MRL team. In-depth organizational analysis is important to understanding the constraints in the risk assessment process, and significantly shaped the implementation of this thesis.

2.1 Company Background

Pratt & Whitney became a major player in the aerospace industry in the 1920s when Frederick Rentschler helped pivot Pratt & Whitney into manufacturing to produce a novel air-cooled engine of Rentschler's design [9]. Since then, Pratt & Whitney has had major success with research and development and has been a leader in aerospace innovation. With steep capital investments required for new aerospace technologies, Pratt & Whitney programs have long development lifecycles with variable demand. The operations organization must manage production rates of all programs to meet demand based on the lifecycle of the individual program. This means the operations organization requires a precise understanding of production risk to effectively meet demand. With new commercial and military technologies providing significant performance advancements to Pratt & Whitney turbine engines, demand forecasts are high, resulting in a rapid increase in production rate.

Over the last 30 years, Pratt & Whitney has invested $10 billion in the development of the Geared Turbofan (GTF) technology. While a conventional turbine engine has the fan and turbine spin at a single speed, the GTF introduces a gear that allows the two sections of the engine to spin at custom speeds. Using the geared turbofan technology, engines can be designed to capture higher fuel efficiency by allowing the fan to expand and spin slower than the turbine. Continuing the spirit in which the company began, Pratt & Whitney has been able to offer a leap in performance due to novel turbine technology [4]. The GTF engine provides a reduction in commercial aircraft fuel burn of more than 16 percent, a reduction of regulated emissions of more than 50 percent, and a 50 percent reduction in noise [1].

In 2016 Pratt & Whitney had orders from Airbus, Bombardier, Embraer and Mitsubishi for 7,000 engines valued at more than $18 billion. Forecasting further increase in demand, Pratt & Whitney ramped up production of the GTF, aiming to double output by 2020 [12]. With performance enhancing technological innovations in the GTF program, production continues to ramp up with an increased need to mitigate risk in the form of cost, quality, or delivery deficiencies within the current manufacturing processes. Although the initial wave of GTF production was largely successful, the program faced major challenges regarding manufacturing processes that passed through legacy MRL assessments but failed to maintain cost, quality, and delivery performance metrics. Due to manufacturing risk being passed down the production chain, certain parts of the GTF had lower than average final yield rates. Had risk assessments been tied to performance metrics, perhaps quality issues further up the production chain might have been mitigated to maintain high production capacity [12].

The operations organization is responsible for ramping up production rate, and the MRL team manages the method by which each module center assesses and tracks their manufacturing capability in the form of manufacturing readiness. Readiness and risk share an inverse relationship such that processes with higher manufacturing risk are rated lower in readiness. This thesis will focus on implementing a process of assessing manufacturing risk for each part of a turbine engine program tied directly to operations metrics meant to measure performance. Using real time data, hidden production costs were uncovered and aggregated to develop a comprehensive understanding of risk for a turbine engine program.

2.2 Three Lens Analysis of the Operations Organization

The MRL team is located at Pratt & Whitney headquarters in East Hartford, CT. However, customers of MRL output exist all over the company, including the Columbus Forge Disk (CFD) plant in Columbus, GA and the Pratt & Whitney plant in North Berwick, ME. During this time of change and redefinition of the risk assessment process, the MRL team went through a strategic, cultural, and political transformation to more effectively execute to a new standard, and support the sharp ramp in production. The following analysis will examine the organization of the MRL team during this project from three different lenses to later provide insight into how the results from this thesis can most effectively be implemented at Pratt & Whitney.

2.2.1 Strategic Design Lens

This project was an effort conducted in conjunction with the MRL team within the Operations organization. Pratt & Whitney employs a matrix structure to organize functions and programs. The MRL team is part of the central Operations organization that does not have ownership over a single program or section of the engine. The MRL team supports both commercial and military programs, and maintains ownership of standard processes utilized by individual programs. Module centers have ownership over certain sections of the turbine engine, and are the end users of the standardized risk mitigation tools developed by this thesis. Risk assessments are carried out by the MRL team, but the responsibility of implementing improvements and meeting production rate falls to the individual module centers. Therefore, strategically aligning incentives between the module centers and the MRL team by integrating performance metrics with risk analysis maximizes the effectiveness of the risk assessment process. This thesis seeks to provide a comprehensive risk assessment process to increase the ease of handoff from the MRL team to the module centers. The MRL team is intended to act as a support group helping to enable accurate and productive usage of standard risk assessment processes across all production facilities within Pratt & Whitney. Incorporating the needs of the module centers as well as the MRL team was integral to developing a set of risk analysis tools to support the entire production process.

2.2.2 Cultural Lens

Pratt & Whitney is a large manufacturer in the aerospace industry, and has been a major player in this space for almost 100 years. The culture at Pratt & Whitney is shaped largely by the aerospace industry and the lifecycle of Pratt & Whitney's major programs. Existing in the aerospace industry and working with both commercial and military contracts breeds a conservative culture of slow and deliberate change. Pratt & Whitney takes a lot of pride in its support of the military, and understands the high stakes in which it operates. Therefore, accountability in systems and processes is high, such that all are owned and regulated by teams throughout the enterprise. Every bit of data is restricted and requires permission to access from the owner of the source folder.

The production and technology lifecycle of turbine engines at Pratt & Whitney contributes significantly to the culture at the company. Currently, Pratt & Whitney is in a period of high growth after completing research and development of the GTF and finally ramping up production. With that comes increases in hiring, and growth in entry level positions as new programs are formed to fulfill contracts. The workforce at Pratt & Whitney East Hartford is getting younger, and is transitioning from a time of internal technology research to that of full scale production. Introduction of a younger workforce into existing teams is also driving the push toward data driven solutions that don't require years of industry knowledge to provide insight. However, the conservative nature of changing processes in an aerospace company impedes the rush to provide data driven solutions to every problem. This thesis provided a method by which risk could be analyzed and mitigated using standard metrics based tools. The developed process is initially being used as a validation tool alongside the qualitative MRL assessment. Building trust in new tools and processes is important in a company with such a high impact product.

2.2.3 Political Lens

The champion of this project, and the person with the most political capital involved with this effort, is the VP of Operational Excellence. He transitioned to Pratt & Whitney in 2018 from the automotive industry, which is the manufacturing industry leader in production efficiency. With the projected increase in demand due to the GTF, Pratt & Whitney would be operating in a similar environment to an automotive company, in which throughput efficiency would be the driving performance metric. The mandate of the VP of Operational Excellence is to expand capacity as cost effectively as possible and get Pratt & Whitney ready to meet the influx of demand.

In order to effectively manage overall capacity and predict production rates, the Operations organization must effectively assess the readiness of their separate module centers to satisfactorily produce different sections of the engine. While the Operations organization owns the process for assessing the readiness of the module centers, the module centers are composed of manufacturing teams responsible for meeting performance metrics and submitting to MRL assessments. The two organizations have competing incentives, as the module centers view the MRL assessments as obstacles not tied directly to meeting performance milestones. Part of this effort was to help align incentives between the manufacturing teams and the process management teams.

Chapter 3

Literature Review

A great deal of literature exists on assessing manufacturing risk based on operations metrics, and most of the research has been conducted within the fundamental topic of Lean Manufacturing. In order to discuss risk calculation methods, the initial concept of continuous, evidence based process improvement in Lean Manufacturing must be explored. This chapter also discusses research regarding readiness levels, beginning with Technology Readiness Level and focusing on Manufacturing Readiness Level. Finally, literature on different manufacturing risk frameworks, and their use of real time data, will provide insight into how a quantitative risk assessment could benefit Pratt & Whitney.

3.1 Lean Manufacturing

As information technology applications become more powerful in manufacturing, companies in all manufacturing based industries strive to harness data and analysis techniques to reduce cost and maximize capacity. The core philosophy of lean manufacturing is the production of goods with less of every resource, including less waste, human effort, manufacturing space, and investment in tools. Lean manufacturing is focused on reducing waste in the manufacturing process, beginning with the 7 wastes of the Toyota Production System. These 7 wastes are overproduction, excess inventory, waiting, transportation, unnecessary motion, over-processing, and defects [13]. The focus of lean manufacturing is to uncover hidden costs in the manufacturing process, and continuously drive manufacturing efficiency.

The method by which lean manufacturing has been managed is the predecessor to the later technology readiness level and manufacturing readiness level assessments. In "Leanness Assessment Tools and Frameworks", Oleghe and Salonitis discuss the concept of "Leanness" in manufacturing organizations. Essentially, "Leanness" is a measure of the performance of lean manufacturing processes [3]. This metric is an aggregation of all contributing factors, depending on the specific scenario. The following example methodology in Figure 3-1 specifies three sub-groups of Lean Manufacturing: Just in Time (JIT) management, Quality Management (QM), and Total Productive Maintenance (TPM). Figure 3-1 shows a hierarchical structure with the methodological steps for assessing "Leanness" in such a manufacturing process [3].
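As a toy illustration of this hierarchy, the snippet below scores each sub-group from its own (already normalized) metrics and aggregates the sub-group levels into one overall leanness level. The 0-1 metric scales, the weights, and the weighted-average roll-up are assumptions for illustration; Oleghe and Salonitis tailor both the metrics and the aggregation to the process being assessed [3].

```python
"""Illustrative only: the metric values, weights, and weighted-average
roll-up below are assumptions; the framework tailors metrics and
aggregation to each process [3]."""

# Sub-group -> (weight, metric scores normalized to a 0-1 leanness scale)
subgroups = {
    "JIT": (0.40, [0.8, 0.7]),   # e.g. schedule adherence, inventory turns
    "QM":  (0.35, [0.9, 0.6]),   # e.g. first-pass yield, escape rate
    "TPM": (0.25, [0.5, 0.7]),   # e.g. OEE, unplanned downtime
}

def subgroup_leanness(scores):
    """Leanness level contributed by one sub-group: mean of its metrics."""
    return sum(scores) / len(scores)

# Aggregate sub-group levels into an overall leanness level for the process.
overall = sum(w * subgroup_leanness(s) for w, s in subgroups.values())
print(f"overall leanness level: {overall:.2f}")
```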

Figure 3-1: Methodology for assessing leanness in the manufacturing process. Lean manufacturing sub-groups are analyzed individually using quantitative performance metrics tailored to each sub-group. A leanness level is determined for each sub-group representing the amount of leanness contributed by that group. Leanness of each sub-group is aggregated to derive an overall leanness level of the whole process.

3.2 Readiness

Readiness is a term introduced by NASA in the 1960s in the form of a "flight readiness review" as a way of codifying pre-flight checklist rituals meant to determine if a flight vehicle was fit to fly. During the Apollo program, NASA was faced with the challenge of designing systems utilizing technology still in development. In response, NASA created the "technology readiness review", a process meant to evaluate emerging technologies and provide a comprehensive value for their development progress. From there, the concept of readiness spread to manufacturing, logistics, and many other disciplines [6].

3.2.1 Technology Readiness Level

Technology Readiness Level (TRL) was developed by NASA as a method of evaluating emerging technologies for use in solutions to space challenges during the late 1960s and early 1970s. The purpose of introducing a maturity assessment was to have a common scale by which to compare drastically different technologies with different benefits and detriments. Moreover, TRL was easy to understand for all organizations involved in the development of a technically complicated product, allowing the manufacturing companies to easily communicate with investors, customers, and other stakeholders [5]. The nine TRL levels can be divided into three distinct phases:

∙ TRL 1-3: Early conceptual development generally conducted in a university or corporate research laboratory.

∙ TRL 4-6: Medium maturity technology generally involving early prototyping, but still primarily conducted in a laboratory research environment.

∙ TRL 7-9: Focused on final maturity of technology to be transitioned from a research environment to a production environment.

While TRL was integral to NASA's technology development and evaluation process, the system suffered from shortcomings related to the communication and information capabilities of the time. In particular, the TRL system suffered from the "Valley of Death" concept, which demonstrated that many technologies stagnated in development during the handoff from R&D to production. While TRL focused on evaluating the core technology, there were no criteria regarding scalability of the technology and readiness from a manufacturing standpoint [5].

3.2.2 Manufacturing Readiness Level

Due to the success of NASA's TRL framework, offshoots in disciplines other than technology development gained traction. Perhaps the most important offshoot is manufacturing readiness, which can be used to describe new manufacturing technologies or manufacturing new products using known manufacturing techniques [5]. Similar to TRL, MRL is a level based framework that evaluates an entire manufacturing process based on a checklist customized to provide insight into the specific process being assessed. Table 3.1 displays the overarching definitions of each readiness level as specified by Department of Defense Instruction (DoDI) 5000.02 [5]. Although DoDI 5000.02 provides high level guidelines for MRL states, the detailed assessment method is generally developed internally, specific to the application in question. Generally, each level is paired with an in-depth checklist that must be fulfilled before a manufacturing process can pass on to the next level. Checklists can contain qualitative and quantitative questions; however, the end result must be an integer value on the MRL scale.

The problems with MRL stem from the founding principle of an assessment based management system. The readiness level system was developed by customers of a manufacturing process who required a simple ten-point scale to easily understand the progress of a complex system. Ultimately, the output of all MRL assessments is qualitative. While quantitative information can be used as a criterion between levels, the levels themselves have no quantitative meaning. Relying on a qualitative system to assess a manufacturing process held to quantitative performance standards may not provide the most useful measure of readiness.

Table 3.1: DoDI specifications for all 10 manufacturing readiness levels.

3.3 Manufacturing Risk Assessment

Manufacturing risk has been studied in many industries, with a particular interest in assessing how much risk exists in a process. Several frameworks have been developed to assess manufacturing risk using a variety of metrics and scales. In "Process-oriented risk assessment methodology for manufacturing process evaluation", Shah builds a risk measurement model based on the probability and consequence of a risk event, among other factors. The paper concludes by presenting a process-oriented risk assessment methodology to rank manufacturing processes [8]. Assessing manufacturing risk stems from the desire to reduce final delivery uncertainty in a manufacturing process. Cost increases, quality issues, and manufacturing slowdowns all decrease the ability of a manufacturing process to produce the final product at the required rate [8]. Ultimately, effective risk management leads to maximizing value from each manufacturing process.

Additional research conducted on an integrated approach to risk assessments for manufacturing processes defines a framework for assessing risk. The typical risk management lifecycle follows these steps: risk identification, risk assessment, risk mitigation action, and risk monitoring. Accurate risk assessment is integral to the management of risk; otherwise mitigation efforts will not produce the desired effect. Figure 3-2 details the integrated framework next to a general risk management process as proposed by Neghab in "An integrated approach for risk-assessment analysis in a manufacturing process using FMEA and DES" [7].

Figure 3-2: General risk management process vs. Integrative risk management framework. Legacy risk management process shown on the left utilizes industry expertise in a linear process of identifying and assessing risk. Integrative framework is data driven and iterative. Output data feeds back into risk assessment model to drive continuous improvement.

The general construction of the integrated risk assessment framework, combined with the fundamental assessment method given by the assessment for "Leanness" shown in Figure 3-1, will become the basis for the quantitative manufacturing risk assessment process developed by this thesis for Pratt & Whitney. While Figure 3-1 demonstrated the breakdown approach to aggregating sub-assessments into a comprehensive value for a large system, Figure 3-2 deals with integrating quantitative methods into a qualitative assessment process. Modelling, measurement, and mitigation of risk can all be driven by data to improve assessment accuracy and increase the parallel nature of the processes rather than the traditional serial approach.

3.4 Learning Curve

Learning in operations can be defined as the process of becoming more efficient by gaining experience, skills, and proficiency through the lifecycle of a process. Since learning occurs over a period of time, the process can be represented with a curve [11]. In operations and manufacturing, the time variable is often in the form of production milestones. Since learning is a measure of efficiency growth, presumably gained from repetition of the manufacturing process, it makes sense to measure efficiency as a function of units produced. Learning curve theory was developed in aerospace, where T. P. Wright first observed that the average cost of producing a two-seater airplane decreased at a constant rate each time the production quantity doubled [11]. The original log-linear curve proposed by Wright has been modified in several ways to fit applications in many different industries.

3.4.1 Calculating the Learning Curve

Several methods of implementing a learning curve exist. The original curve, still considered the industry standard method in aerospace, is a log-linear relationship between cost and cumulative quantity produced: every doubling of quantity should result in a constant rate of cost decrease. The log-linear relationship captures the idea that early learnings and improvements produce large initial gains, but as the process approaches the ideal state, each additional unit provides marginally decreasing learnings. The original learning curve is defined as follows,

$$m_u = a u^b \qquad (3.1)$$

where:

$m_u$ is the marginal cost or risk associated with the $u^{\text{th}}$ unit,

$a$ is the cost or risk of producing the first unit,

$u$ is the total quantity of units produced,

$b$ is the learning index, defined by $b = \log_2 L$, and

$L$ is the learning rate of the process.

This specific log-linear equation, with $m_u$ describing marginal cost, was developed by James Crawford after observing similar learning in airframe production at Lockheed [2]. While $m_u$, $a$, and $u$ are factors associated with the process itself, extensive research has been conducted on the learning factors $b$ and $L$, resulting in standard values for various industries, with aerospace standardized at an 85% learning rate $L$ [10]. Figure 3-3 shows several sample learning curves built using Equation 3.1 with various learning rates $L$. As $L$ decreases in magnitude, more learning occurs per unit produced. A 90% learning rate $L$ means that each doubling of cumulative production brings the marginal cost down to 90% of its prior level; essentially, costs should decrease by $1 - L$, or 10%, per doubling. Therefore, in Figure 3-3 we can see that $L = 60\%$ shows a roughly 80% reduction in cost or risk over the first 10 units produced, whereas $L = 70\%$ required 30 units of production to achieve the same reduction in cost.

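Equation 3.1 translates directly into code. Below is a minimal Python sketch (my own, for illustration) that computes the marginal cost or risk of the $u^{\text{th}}$ unit; the default learning rate is the 85% aerospace standard discussed above, and the first-unit value is arbitrary.

```python
"""Equation 3.1 in code: m_u = a * u**b with learning index b = log2(L)."""

import math

def marginal_cost(a: float, u: int, L: float = 0.85) -> float:
    """Marginal cost/risk of the u-th unit at learning rate L (Eq. 3.1)."""
    b = math.log2(L)  # b is negative for L < 1, so cost falls as u grows
    return a * u ** b

first_unit = 100.0  # cost or risk of unit 1, arbitrary units
for u in (1, 2, 4, 8):
    print(f"unit {u}: {marginal_cost(first_unit, u):.1f}")
# Each doubling of cumulative quantity lands at 85% of the prior
# level: 100 -> 85 -> ~72 -> ~61.
```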
3.4.2 Use of the Learning Curve

Learning curves have a wide variety of uses in manufacturing applications. Primarily, learning curves are used as a theoretical benchmark to track cost reduction in manufacturing processes over the lifecycle of a production program. However, learning curve methodology is applicable to a wide range of scopes within manufacturing processes. Learning curves can be used both as high level management tools for an entire program or for more granular analysis, such as measuring expected yield from a single manufacturing cell [11]. Moreover, each parameter in the learning curve can be examined to understand the state of the subject process. For example, learning analysis can be implemented such that the output is marginal cost or risk for a target production unit given an industry standard learning rate. However, the equation can also be altered to output the required learning rate given a starting cost and target cost for a target unit. This way the learning curve can be used to benchmark a subject process against industry standard learning as well as derive the optimal learning rate for a certain desired beginning and end state. This thesis implements the learning curve framework as the basis for standardized risk mitigation tools in the final steps of the quantitative risk assessment process. Once risk is evaluated on a part basis, risk tracking and mitigation is done against a learning curve to provide a benchmark comparison for actual results.

Figure 3-3: Log-linear learning curves at various learning rates. At larger rates of learning, the risk decreases more for each unit produced. The industry standard learning rate for aerospace is 85%, meaning risk should fall by 15% with each doubling of cumulative production.

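The inversion of Equation 3.1 described in this section is also a one-line computation: solve for $b$ given a starting cost, a target cost, and a target unit, then convert back to $L$. A short sketch with illustrative numbers (the function name and values are mine):

```python
"""Solving Equation 3.1 for the learning rate: given first-unit cost a
and a target marginal cost m_target at unit u_target, find the L the
process must sustain. Function name and numbers are illustrative."""

import math

def required_learning_rate(a: float, m_target: float, u_target: int) -> float:
    """From m_target = a * u_target**b: b = log(m_target/a)/log(u_target),
    then L = 2**b."""
    b = math.log(m_target / a) / math.log(u_target)
    return 2 ** b

# Cutting cost or risk from 100 to 40 by unit 30 requires roughly 83%:
print(f"required learning rate: {required_learning_rate(100.0, 40.0, 30):.1%}")
```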
3.5 Summary

Lean manufacturing, readiness, and manufacturing risk have been researched extensively, providing a solid foundation from which to explore further. Several frameworks exist for evaluating manufacturing processes, ranging from purely qualitative to highly data driven. As data integration in manufacturing processes increases, the methods by which manufacturing risk can be assessed become more powerful. This thesis utilizes the frameworks and learnings of past research in manufacturing risk assessments to develop a quantitative process for Pratt & Whitney to improve accuracy and support mitigation efforts.

Chapter 4

Project Context

The Manufacturing Readiness Level (MRL) team laid the groundwork for a lasting project, with a plan for integrating results into core Pratt & Whitney systems. This project specifically addresses three main challenges the MRL team currently faces with the qualitative risk assessment process: determining which parts to assess, collecting and comparing operations data, and mitigating identified risk. This chapter discusses the role of the MRL team and the context in which this project fits within the operations organization.

4.1 Timeline of Quantitative Risk Assessment Implementation

In the beginning of 2019, the MRL team was tasked with redefining the process by which manufacturing readiness level would be assessed at Pratt & Whitney. The MRL team started constructing the infrastructure needed to test the hypothesis of this thesis: that by standardizing tools in the risk assessment and mitigation processes, end users would be able to react more quickly to operations problems. While standardization of processes and a redefinition of the qualitative assessment began, the MRL team did not have capacity to explore quantitatively driven risk processes and their integration with the current risk assessment process. Data was included as part of the qualitative assessment process. An operations data lake was developed in a data visualization and integration system called QLIK, providing access to specific dashboards containing operations metrics.

This project began in June 2019, and kicked off the MRL team's effort to evaluate metrics and build a quantitative model for assessing risk to be used alongside the industry standard MRL assessment. The MRL team began by exploring three key operations metrics: cost, quality, and delivery. Although three key metrics were identified, the initial research of this thesis focused heavily on all different permutations of cost, quality, and delivery and each one's role in determining a final production risk value. Metrics came from data sources across the enterprise, all under different access restrictions. The initial effort was to connect relevant data from different parts of the Pratt & Whitney data network to the final tool. In parallel, the MRL team had capacity constraints regarding the number of parts that could be analyzed for each given engine program. This project developed an automated process for determining an operations critical bill of materials based on research into previous programs, and interviews with industry experts in the operations organization. The final step was to develop standard tools for mitigating risk once discovered. As the MRL team had already built standardized tool infrastructure for managing readiness levels, this project drove development of standard tools utilizing the output of the quantitative risk assessment. Detailed in this chapter is the role of the MRL team in the operations organization, and the specific challenges this project is meant to address in their newly structured manufacturing readiness level assessment process.

4.2 MRL Team

The Manufacturing Readiness Level (MRL) team exists as a support entity in the central Operations organization. Manufacturing teams throughout Pratt & Whitney interface with the MRL team to validate their processes to certain readiness levels at different stages in the lifecycle of a program. MRL is a device that upper management uses to get a high level perspective of risk distribution throughout an engine program. Moreover, MRL is used as an easy to understand method of communicating program progress to customers. The MRL team owns the processes used to determine MRL for a manufacturing process, and continuously seeks to make the process more accurate and easier to use.

4.2.1 Responsibilities and Value Proposition

Historically, the MRL team has played a support role to the individual manufacturing teams when it comes to readiness. The MRL team provides frameworks and checklists for conducting readiness level assessments, and the module centers that own the engine sections are responsible for managing manufacturing processes to pass certain levels at certain milestones in the production program.

Although module centers are accountable for their own performance, the MRL team is held accountable for program level performance issues due to inaccuracies in MRL assessments. Early on in the GTF program, module centers were able to waive certain MRL requirements in an effort to ramp production. Moreover, the legacy MRL assessment processes were not standardized, resulting in a wide variety of interpretations of the same checklist requirements. The GTF program faced manufacturing challenges due to propagation of risk through the production process until the risk was demonstrated when production rate goals could not be achieved. The Operations organization placed an emphasis on compliance with MRL team processes, and tasked the MRL team with renovating the MRL assessment process with standard tools and automated processes. Development of enterprise wide, proficiency based standard work was key to aligning the performance metrics of manufacturing teams with MRL assessment output and increasing the accessibility of risk assessments to every stakeholder in the organization. While legacy assessments focused on evaluation, proficiency based standard work would be action oriented, with activity based assessments directly driving risk mitigation efforts.

Figure 4-1: Issues and actions identified during the MRL reset effort. Four main areas of improvement were identified for the MRL team: standardization of risk assessment tools, integration of MRL into existing processes, development of MRL training, and enforcement of MRL requirements. This thesis addressed standardization of tools and integration of MRL with quantitative metrics.

4.2.2 Current Major Efforts

To execute on its new mandate, the MRL team began by identifying four major areas for improvement: lack of standardized MRL assessment tools, lack of MRL integration into existing processes, no formal MRL training, and a large amount of MRL process noncompliance, as shown in Figure 4-1. Lack of standardized MRL assessment tools in the legacy MRL process allowed different teams to interpret qualitative judgements differently. Furthermore, confusion regarding specific methods of assessing MRL among the manufacturing teams required substantial support from the MRL team. In some cases, risk was inadvertently passed down the production chain due to optimistic qualitative judgements artificially advancing the readiness level of several processes. Integrating MRL into existing processes controls for incorrect advancement through the MRL process, as MRL assessments can be validated against performance metrics. This thesis focuses on addressing these first two root issues by developing a standard framework for manufacturing risk analysis heavily driven by real time operations data.

The major overhaul for the MRL team in the beginning of the reset was the development of activity-based standard tools to improve the MRL assessment process. This represents a new generation of MRL assessment beyond the legacy MRL method of using a general DoD questionnaire. Figure 4-2 shows the generational progression of the MRL process at Pratt & Whitney.

Figure 4-2: Progression and advancement of generations of manufacturing assessment methods at Pratt & Whitney. Risk assessment methodology has historically evolved according to changing needs and capabilities at Pratt & Whitney. The Third Generation process developed by this thesis leverages new data collection and analysis capabilities to make risk assessments more quantitative and accessible.

Initially, the G1-Review was focused on third party review rather than self-assessment and continuous improvement. Historically, MRL assessments have been broken up into individual "threads" that represent sub-groups at which granular MRL assessments are made. The second generation MRL assessment process transitioned from an internal review-based system to a comprehensive questionnaire developed by the DoD. Although this process improved on the first generation assessment, the second generation required large amounts of time to evaluate all 400+ questions. Additionally, this process did not help integration between disciplines, and criteria interpretation remained variable.

Following the initial challenges of the GTF program, the MRL team launched the third generation of risk assessments, pivoting to an activity based MRL system. Development for this system began in the beginning of 2019. The activity based MRL system is based on over 15 different threads, each of which has its own standard process for evaluating MRL. Similar to the "Leanness" framework examined in the literature review [3], each thread is evaluated for MRL independently. Finally, the MRL scores of all the threads are combined into a single MRL score resulting from the individual activities done at the thread level. Threads are focused on specific challenges associated with each manufacturing process. Examples include a sourcing strategy thread, such that at higher MRLs sourcing agreements are in place and production ready, and a yield curve thread, which advances MRL based on certain manufacturing yield milestones.

The results of this thesis were part of the effort to revamp MRL to generation three, including standardized tools for risk mitigation and integration of performance metrics into the risk assessment process. This thesis works in tandem with the qualitative MRL process to provide a quantitative risk assessment process able to validate MRL results, and eventually be fully integrated with the MRL process.

4.2.3 Engagement with this Project

The MRL team is championing this project as part of the overall effort to redefine manufacturing risk analysis in the Operations organization. In addition to standardizing tools, the MRL team is working on integrating operations metrics and real-time data into the risk analysis process. Currently, MRL determinations are made via qualitative checklists, occasionally using quantitative thresholds to check different boxes. This thesis provides a risk measurement from purely quantitative manipulation of metrics, producing a data-driven measure of risk that can be compared against MRL analysis to truly evaluate the readiness of manufacturing processes.

4.3 Key Hurdles in Qualitative Risk Assessment

In addition to researching methods of developing an accompanying quantitative method of risk assessment, this project aims to address three specific hurdles the MRL team faces with the current risk assessment process: part selection for assessments, data collection and comparison, and standard methods of managing risk mitigation efforts.

4.3.1 Hurdle 1: Which Parts to Assess?

The qualitative MRL process takes a significant amount of labor. The second generation process was a DoD questionnaire of over 400 questions. Each question required intimate knowledge of the manufacturing process, such as input sourcing strategy, cost data, and tooling timeline. The third generation activity-based assessment is also quite time intensive. The standard tools reduce the risk of error, but acquiring the necessary data and status information requires interfacing with several different systems and significant amounts of processing time. Since MRL is generally done at a part level, conducting an MRL assessment for each part in a program generally takes 40 hours of total labor. Since turbine engine programs contain anywhere between 7,500 and 9,000 parts, it would be unreasonable to expect all parts to go through the MRL assessment process. MRL assessments are done regularly through the lifecycle of a program, so each part would have to be assessed several times.

The Operations organization attempted to address this issue by creating a process called the Manufacturing Critical Parts List (MCPL). The intent of the MCPL was to define a standard process by which a list of parts critical to the readiness of an engine program can be generated from the full bill of materials (BOM). The MCPL process was able to identify key parts in engine programs that require special attention. However, the process was still qualitative, and the generated MCPL focused solely on the chosen parts. Since the process relied mostly on manual judgements of qualitative and quantitative criteria, once separated from the global BOM, the MCPL was unable to provide insight into omitted assemblies that might pose unexpected risks.

Rather than completely narrowing the scope of MRL assessments to a subset of parts, this project conducted interviews with different stakeholders and developed an automated process by which each pick-level part was collected into an Operations Bill of Materials (Ops-BOM). To maintain visibility into every part in an engine program, the Ops-BOM is displayed at the pick-level part but has the ability to expand and collapse subassemblies so each part can be accessed if needed. With the global BOM hierarchy contained within the Ops-BOM, the MRL team interacts with the engine through a manageable list of critical parts but does not lose visibility into the rest of the engine should an unexpected risk occur.

4.3.2 Hurdle 2: Data Collection and Comparison

While metrics and quantitative inputs have always been an integral part of the MRL process, these metrics are often compared against thresholds to make a qualitative judgement of readiness level. Moreover, metrics are collected independently of each other and are often curated by different teams adjacent to the Operations organization.

Operations data exists in several sources around the enterprise. Due to the sensitivity of this data, access requires application and justification. However, the Pratt & Whitney IT system is such that custom data dashboards can be developed to display information from different sources in one central location. This project worked with different stakeholders to identify the critical metrics required to compute risk for each part in an engine program. Working alongside an IT analytics team, the process of connecting each data source to a central dashboard was set in motion.

While identifying and connecting data assisted the MRL team in accessing information for risk assessments, a quantitative model must utilize these metrics as quantitative inputs to output a single value for risk incurred to Pratt & Whitney. In a way, the qualitative MRL process uses industry experts to compare data from different fields to make a final assessment. However, in the spirit of standard processes that remove subjective judgement from contention, operations metrics with different units, measured in different ways, had to be converted to common units and combined in some manner. This research focused on three key metrics: cost, quality, and delivery. To deliver a final risk impact value for a part, the resulting methodology must be able to compare dollars, defect rate, and on-time-delivery percentage.

Designing the output as a calculated risk value at a part level means that risk can be aggregated in any part group, including the full engine. Conducting a qualitative MRL assessment requires a custom detailed process for each part. Metrics and data are judged based on the specific part being assessed. The final MRL value is not easily comparable to that of another part that might have had a different team of people conducting the assessment. Moreover, aggregating part readiness to judge readiness for the entire engine program is exceptionally difficult. The quantitative risk output proposed by this thesis supports aggregation and division, as all metrics come from the same central sources and have been normalized in the same way. As data becomes more accessible at Pratt & Whitney, the MRL team seeks to integrate real-time data with standard MRL processes. Not only does creating a central data repository allow quicker qualitative assessments, it provides the opportunity to develop automated numerical methods of calculating risk that can be compared and validated via the traditional MRL process.

4.3.3 Hurdle 3: Risk Analysis and Implementation

Following the MRL assessment process, the MRL team also supports manufacturing teams with risk mitigation. If the MRL is lower than it should be, the manufacturing team determines which threads are having the biggest impact and works on developing a solution. For example, if the yield curve MRL is bringing down the overall assessment, the manufacturing team will conduct detailed research into why yield is lower than expected. The MRL team often faces challenges with supporting MRL advancement when manufacturing teams are also evaluated with separate operations performance metrics.

Standard MRL tools tend to be checklists with qualitative questions that help characterize the state of the process from different perspectives. However, since MRL assessments are state-based and manufacturing systems are dynamic, certain risks may be overemphasized while others may be hidden when evaluating with a checklist.

As part of the quantitative risk assessment process, this thesis develops a methodology for standard dynamic tools that aid in risk mitigation efforts. Once risk is identified, these tools offer a management framework that can be implemented by anyone in the production hierarchy. For example, mitigation of overall risk at the part or engine level is managed by allowing manufacturing teams to track theoretical risk, actual risk, and the mitigation action plan within one tool. Due to the automated nature of the quantitative risk assessment system, risk can be recalculated after each iteration or set of iterations of the manufacturing process to dynamically track how expected improvements in the process compare with actual results.

4.4 Integration with IT Systems

While the quantitative risk assessment process was being developed, an IT analytics team was working alongside to incorporate design and analysis decisions into a final QLIK dashboard within the Pratt & Whitney operations data lake. QLIK is the data analytics and integration system by which employees can access real-time data connected to continuously updated central data sources. This project focused on researching relevant metrics and developing methods for building an operations critical list of parts, assessing risk using key operations metrics, and implementing standard tools by which operations risk can be mitigated. Each week, the results from this research drove development of the centralized QLIK dashboard, including selection of operations critical parts, key operations metrics, and the method of calculating a comprehensive risk value. This ongoing development was important in implementing the results of this project within established Pratt & Whitney systems, immediately providing value to the Operations organization.

Chapter 5

Data Analysis and Methods

This chapter discusses the three phases of this project, spanning the entire lifecycle of engine program risk assessment. Each phase targets a different hurdle in the current risk assessment process, and together they create a standardized, semi-automated process for managing risk for a turbine engine program.

5.1 Phase 1: Operations Critical Bill of Materials

As discussed in Section 4.3.1, the first phase of the quantitative risk assessment methodology proposed by this thesis is to develop a list of parts, called the Ops-BOM, at which risk is assessed and reported. This list of parts should be representative of the entire engine and maintain hierarchical links with the global BOM so any other part of interest can be accessed if needed.

An initial effort to develop a similar list, called the Manufacturing Critical Part List (MCPL), fell short due to underrepresentation of parts that were later determined to have been critical. While the MCPL was a robust process for determining the vast majority of critical parts, its reliance on qualitative differentiation factors for sorting parts into the MCPL meant that generation of the MCPL could not be automated, and creation of the MCPL did not preserve the connections between MCPL parts and the rest of the global BOM. Once a part was removed from the MCPL, MRL visibility into that part would be severely reduced. The Ops-BOM process improved on the MCPL by aligning the selection criteria with digital information available in the raw global BOM, thereby significantly reducing the time to generate the Ops-BOM while also preserving visibility into every part in the engine.

5.1.1 BOM Data Sources

Generating a bill of materials at Pratt & Whitney is not as trivial as looking up a parts list for a certain engine configuration. Teams from around the company use specialized bills of materials to fit their own needs; the cost, MRL, industrial engineering, and manufacturing teams all use different BOMs. This section will discuss the different BOMs relevant to the creation of the Ops-BOM as well as other sources of data needed to build the final automated process.

Most BOMs throughout the enterprise are generated as a subset of some global BOM that contains every part in an engine. This global BOM is called an Engineering BOM, or EBOM. The EBOM is a comprehensive list of parts in any engine program, and each program has a unique EBOM. This phase of the project generated an Ops-BOM by using a set of rules to choose a subset of parts within the EBOM. However, the EBOM also contains information related to part hierarchy, such as next-higher-assembly and subassembly data. While only a subset of parts were chosen to be displayed in the Ops-BOM, the hierarchical connections from those parts to the rest of the EBOM were kept intact.

Although the EBOM is a comprehensive list of finished parts in an engine, a different BOM called the MBOM exists for manufacturing teams. During the manufacturing process of such a complex product, intermediate parts are created and passed between manufacturing teams. For example, a final disk in the engine may have gone through several manufacturing processes such as forging and machining before reaching its final state. Along the way, the manufacturing teams must assign some label to the semi-finished part in order to track its progress through the production chain. While the EBOM documents the final forged and machined part number, the MBOM tracks intermediate states between the forging and machining steps, along with the raw materials needed for manufacturing teams to operate successfully. Since this project is focused primarily on assessing risk at the manufacturing level, visibility into MBOM unique parts is important. After generating the subset of Ops-BOM parts, this project constructed a digital connection to the MBOM so that risk calculations can be done for any part in the production process.

While researching different sources of manufacturing data throughout the enterprise, a spares inventory model was discovered that generated an Oracle table matching EBOM parts with MBOM line items. The model had been developed to keep track of final part inventory for spares while taking into account the semi-finished parts currently tied up in the manufacturing process. This inventory model provided the final data source needed to populate semi-finished part data within each final part in the Ops-BOM. After gaining access to the data needed to generate a useful Ops-BOM, the team discussed how to determine a set of rules for defining the Ops-BOM.

5.1.2 Development of Ops-BOM Rules

Each engine program generally contains between 7,500 and 9,000 parts. However, not every part is important to scrutinize when assessing the aggregate risk of an engine program. For example, MRL assessments don’t necessarily need to be conducted for every fastener in an engine, but each set of fasteners has a line item in the global EBOM. The goal of the Ops-BOM process was to use attribute data about each part present in Pratt & Whitney BOM databases to determine what set of rules could reasonably result in a representative list of parts from an engine program.

Two main engine programs were chosen to test and validate the Ops-BOM process, referred to in this thesis as Program K and Program F. Although several factors about each part were available to use in the decision model, the initial approach to rules development centered around emulating industry experts. A major module was chosen to develop an industry knowledge-based list of parts gathered from interviews with stakeholders across the Operations organization. During the stakeholder list discussion, it became clear that the goal of this effort was to be able to identify assembly-ready parts. Assembly-ready parts can be conceptualized as the final product of a manufacturing process before final assembly. The difficulty is determining which parts are considered assembly-ready, because there is no single BOM level or definition with which to filter a BOM. Using industry experts to comb through sections of BOMs and determine assembly-ready parts provides a reference for computationally determining what data might have emulated the same result.

The initial emulation model used a hand-crafted list of assembly-ready parts along with part attribute data to construct a decision tree. Success of the decision tree output was measured by minimizing deviations from the hand-crafted list. Figures 5-1 and 5-2 show the initial decision trees generated from the hand-crafted industry knowledge BOMs.
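As a minimal sketch of this emulation approach, the snippet below fits a shallow, interpretable decision tree to a labeled EBOM extract. The file name, the column names ("part_type", "item_type", and "nha_item_type" for the next-higher-assembly item type), and the "in_opsbom" label are hypothetical stand-ins for the proprietary Pratt & Whitney attribute data.

import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical labeled export: EBOM attributes plus the expert-sorted label.
ebom = pd.read_csv("program_k_ebom_labeled.csv")

# One-hot encode the categorical attributes so the tree can split on them.
X = pd.get_dummies(ebom[["part_type", "item_type", "nha_item_type"]])
y = ebom["in_opsbom"]

# A shallow tree: the goal is an interpretable ruleset, not raw accuracy.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Print the learned rules for comparison against expert judgement.
print(export_text(tree, feature_names=list(X.columns)))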

Examining the decision tree outputs, we can determine the deciding data factors that contributed to providing a solution as close as possible to the hand-crafted list of assembly-ready parts. Two important attributes were found to be "Item Type" and "Part Type". The item type is a classifier describing the complexity of the part in question. The part type is a description of the function of the part, such as fastener, supporting structure, or turbine component. This thesis uses letter codes to anonymously represent different part types for intellectual property reasons. Program K resulted in a simple decision tree with only two deciding variables determining whether a part enters the Ops-BOM: the part type and the item type of the next higher assembly. The Program F decision tree included a few more decision points, but only added one additional decision variable, the item type of the part itself.

The results of the initial decision tree analysis helped drive the discussion regarding the official set of rules to be used for the Ops-BOM. Although one of the decision trees could have been used directly, the ownership system for the Ops-BOM process at Pratt & Whitney is not heavily integrated with data systems. The goal of this phase was to develop a method of generating an Ops-BOM that displayed a limited list but retained visibility into the rest of the BOM. A primary focus of this effort was to align rule definitions with concrete data so that the process could be automated given access to the data.

Figure 5-1: Program K decision tree for generation of the Ops-BOM. The manually sorted training set identified 24% of the EBOM parts as critical for risk assessment. The automated attribute-based sorting method described by this tree resulted in capturing 31% of the EBOM. Two key attributes were identified: Item-Type of Next Higher Assembly and Part Type.

Figure 5-2: Program F decision tree for generation of the Ops-BOM. The manually sorted training set identified 18% of the EBOM parts as critical for risk assessment. The automated attribute-based sorting method described by this tree resulted in capturing 19% of the EBOM. Three key attributes were identified: Item-Type of Next Higher Assembly, Item-Type of Subject Part, and Part-Type.

The first branch in both trees had to do with the next higher assembly’s item type. The different item types are detail, assembly, and non-machined assembly (NMA). Details tend to be smaller parts consisting of a single piece of sheet or machined metal. Assemblies are generally groups of details or other assemblies that have been fastened together. For instance, several sheet metal details might join together to create an assembly bracket. Finally, NMAs are collections of assemblies and details that are not necessarily one single part. For example, an NMA could be a set of turbine blades necessary to build one engine; the blades are not joined together into a single physical assembly. Often, NMAs are groups of parts or large sections of the final engine. While details and assemblies are tracked closely at the manufacturing level, NMAs are mostly used for final assembly and tracking engine-level metrics. The first decision for both trees was to eliminate any parts whose next higher assemblies are classified as assemblies. This suggests that the most significant rule for determining whether a part is assembly-ready is whether the parent assembly of the subject part is classified as an NMA. Intuitively this makes sense because NMAs tend to be collections of parts ready to be put into the final frame of the engine. The parts that roll up into NMAs are single units but are ready to serve their purpose in the greater assembly. These parts can be details and sub-assemblies, but once they enter an NMA, fabrication is usually finished.

The second common point between the two trees in Figures 5-1 and 5-2 is the use of Part Type as a significant differentiator. Both trees created a rule to eliminate all parts of part type AN, AS, MS, and ST. These are all standard part types that refer to items not unique to Pratt & Whitney: generally fasteners, nuts, and other standard equipment that is not manufactured specifically for the engine. Since these parts exist at every level of the BOM, simply checking whether NMA was the parent item type would not have worked.

While Program K only used these two rules, Program F created a few more branches associated with the item type of the subject part itself. An interesting point is that the item type is used twice, once to include NMA and detail items, and another time to exclude remaining NMA items. The team examined the training data to determine why the decision tree would output these decisions, and found that in most cases, NMAs were not included in the Ops-BOM because they existed outside the scope of manufacturing teams. Aggregating the analysis results, the team developed a set of rules as follows:

0. Start with full EBOM

1. Remove all parts that do not roll up to NMA (non-machined assemblies)

2. Remove all NMAs

3. Remove all parts of type: AN, AS, MS, ST, NAS, and DS

4. Remove all duplicate part numbers

The final rule list was heavily influenced by the decision tree analysis of the manually generated BOM, but included some additional rules deemed necessary during an additional deep dive into the data. “DS” was found to be another part type not necessary in the Ops-BOM, and while doing a final pass the team discovered line duplications in the raw data. Rules 3 and 4 were modified to reflect these updates. The rules begin indexing at 0 because Rule 0 is the definition of the input; Rules 1-4 act to select parts for the Ops-BOM. A minimal sketch of the ruleset as an automated filter is shown below.
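In this sketch, Rules 0-4 are expressed as a pandas filter; the DataFrame column names ("part_number", "part_type", "item_type", and "nha_item_type" for the next-higher-assembly item type) are hypothetical placeholders for the actual EBOM schema, and "rolls up to NMA" is interpreted as the next higher assembly being classified NMA.

import pandas as pd

STANDARD_PART_TYPES = {"AN", "AS", "MS", "ST", "NAS", "DS"}

def generate_ops_bom(ebom: pd.DataFrame) -> pd.DataFrame:
    """Apply Rules 1-4 to a full EBOM (Rule 0) to select Ops-BOM parts."""
    ops_bom = ebom[ebom["nha_item_type"] == "NMA"]                       # Rule 1
    ops_bom = ops_bom[ops_bom["item_type"] != "NMA"]                     # Rule 2
    ops_bom = ops_bom[~ops_bom["part_type"].isin(STANDARD_PART_TYPES)]   # Rule 3
    return ops_bom.drop_duplicates(subset="part_number")                 # Rule 4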

5.1.3 Automated Ops-BOM Tool

Since the ruleset for generation of the Ops-BOM was built to support automation, this thesis explored methods for building such a tool. One of the major shortcomings of the MCPL was the heavy manual investment needed to generate a list of parts. Several of the criteria did not have central data sources, so parts had to be judged by going through the BOM part by part. The rules for generating the Ops-BOM were instead grounded in available data, matching part attributes with the ultimately desired output. An Excel tool was developed as part of this thesis to decrease the processing time of generating an Ops-BOM. Additionally, the value proposition of this Ops-BOM partly lay in maintaining links to the overall BOM hierarchy as well as incorporating MBOM unique parts into the fold. To do this, the final Excel tool utilized SQL queries to connect directly from Excel to the Oracle databases containing the BOM information. Tool users only needed to interface with buttons in Excel, so the integration would not require significant systems and programming knowledge and would be accessible to every stakeholder in the organization. The final automated tool and the value it contributes to the MRL team will be discussed in detail as part of the results section.
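The thesis tool issues its queries from Excel; purely for illustration, the equivalent lookup in Python might look like the sketch below. The table name, column names, connection string, and program identifier are all hypothetical, and access to the real databases requires the approvals noted earlier.

import cx_Oracle  # Oracle client library
import pandas as pd

# Hypothetical connection details; real credentials and DSNs are restricted.
conn = cx_Oracle.connect("user", "password", "bom-db.example.com/ORCL")
ebom = pd.read_sql(
    "SELECT part_number, part_type, item_type, nha_item_type "
    "FROM ebom_table WHERE program = :prog",   # hypothetical table/schema
    conn,
    params={"prog": "PROGRAM_K"},
)
conn.close()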

5.2 Phase 2: Quantitative Risk Analysis

The main phase of this thesis was developing the quantitative risk analysis process for measuring risk in the chosen Ops-BOM parts. Operations metrics are critical to understanding risk, and the team decided to focus specifically on three areas: quality, cost, and delivery (QCD). Within QCD are many different metrics calculated in various ways. Stakeholder research was conducted to determine the ultimate need from a risk impact value. Management and other customer teams drove a preference for a total cost value representing the impact of each part in an engine to the program. Quality and delivery metrics were converted into costs incurred by Pratt & Whitney. Combined with cost data, the total incurred cost to the company was compared against target cost to calculate the risk added by each part. This section details the different operations metrics, data source connections, and risk calculations done to construct the automated risk dashboard.

5.2.1 Key Operations Metrics

Quality, cost, and delivery are the major performance metrics used by manufacturing teams. Section 2.2, the three lens analysis, discussed how incentives for the manufacturing teams were not aligned with MRL and the rest of the Operations organization. Integrating operations metrics directly connected to manufacturing performance was done in order to develop a common incentive structure that maximizes compliance with MRL. Moreover, these three metrics have historically been the driving metrics used by operations management to make strategic decisions. To ensure smooth integration of the final product, these three critical metrics were piloted with plans to expand the model further.

Quality control is a central tenet of operations and lean manufacturing. A process producing faulty product at rate is effectively producing nothing. Pratt & Whitney measures quality from several different perspectives. The purpose of this project is to determine the amount of extra cost incurred by Pratt & Whitney due to quality issues; therefore, the most important metric is Cost of Poor Quality (CoPQ). CoPQ is calculated by summing individual costs due to quality issues in the manufacturing process. Rework and repair costs, scrapping costs, discounting costs, and even credit from reuse are captured in CoPQ. Each quality instance is tracked, and the risk assessment dashboard aggregates CoPQ instances by part number to determine the added cost due to quality issues in each part. However, quality problems not only increase costs for repair and scrap, they also take administrative cost to propagate through the quality mitigation system.

The most straightforward way of measuring administrative cost due to quality issues is the number of quality tickets generated, called QNs. When parts are inspected, any defective parts are written up with a QN. QNs may have many line items within them due to multiple defects, or when a batch of parts with quality issues is collated into the same ticket. An alternative metric to QNs is the total number of line items, QNLI. Since each QNLI is processed in a standard method while treatment of QNs varies based on the number of QNLI inside, QNLI is used as the metric by which to calculate the administrative cost of incurring quality-based risk.

Measuring cost requires two metrics: target cost and actual cost. Target cost is the benchmark against which the combined cost of the process will be compared; cost above target is considered the cost of risk to Pratt & Whitney from that part. The costs in these metrics are the expected costs to the company to produce each part, including material costs, cost of labor, overhead costs, and transportation costs among others. The target cost represents the incurred cost acceptable to meet final delivery goals. The difference between the actual cost and target cost represents the additional risk due to cost for each part. If the cost to produce a part is larger than anticipated, final engine production may be affected. At Pratt & Whitney, the module centers work with cost teams to determine target costs for each program and track actual costs. This current method omits quality and delivery factors, which is why this thesis seeks to combine the three perspectives.

Delivery was challenging to characterize because delivery metrics can come in many different forms. On-time delivery fill rate (OTD) is generally delivered as a percentage, but delivery can also be tracked from the customer perspective using average days late. This metric is critical to the success of the risk model because cost and quality cannot capture delivery issues that could cripple the whole process. Even if a module center is producing to rate with minimal quality defects and is meeting cost targets, the company will incur costs if the output is not being delivered on time. Since most of the customer relationships in the Operations organization are internal to Pratt & Whitney, OTD was chosen as the delivery metric to provide a scalable term for the final risk equation that could be agreed upon by all stakeholders.

5.2.2 Connecting Data Sources

The most challenging aspect of building the quantitative risk assessment process was connecting the different data sources into one centralized location. As mentioned in Section 4.4, an IT team was working alongside this thesis to integrate results into the development of a centralized data dashboard. Part of this effort was identifying the systems that housed raw data and connecting them to the data dashboard. Although a few systems required external approval with timelines beyond the scope of this project, most data were accessed via SQL queries to Oracle tables within Pratt & Whitney.

Quality and delivery data were easily accessible via a central repository of data created as part of Pratt & Whitney’s connected factory initiative. Cost data was more difficult to access, as it existed in a separate system restricted to the cost team. Normally, members of the MRL team would contact cost team members to receive an Excel export. However, utilizing real-time data is a driving factor of this thesis, so efforts to directly connect to the cost system are underway. This thesis initiated a direct connection into the cost system, but the timeline for fully integrating real-time cost data fell outside the scope of this project. The pilot dashboard developed by this thesis uses Excel exports as a proof of concept of the usefulness of the tool itself.

5.2.3 Calculating Risk Impact

The preferred output metric for customers in the Operations organization is cost. Measuring risk in the form of cost shows how much cost in a given time period was incurred due to risk in the process; a risk-free process incurs no additional cost to the business beyond the strict cost of producing the product. The real-time operations metrics listed above can be combined to give an assessment of risk. Figure 5-3 shows how the original organization structure separated operations metrics from the risk assessment, and how integration of metrics through a combined calculation will provide benefits to both processes.

The challenge of combining the metrics was that the units of each metric are different. Initially the team considered collecting historical impact data and generating a training set similar to what was done for creation of the Ops-BOM. However, this kind of measurement had never been done before, and attempts to create training values were heavily biased by the methods used to retroactively calculate historical values. Therefore, further efforts were spent working with members of the cost, quality, and management teams to understand the dollar impact of different events to the Pratt & Whitney business.

Figure 5-3: Information flow diagram of the current system and the data driven risk analysis process. Cost, quality, and delivery metrics are currently considered separately, and not integrated into the MRL assessment process. Metrics are described in different units and are hard to compare, resulting in highly variable risk assessment results. The proposed solution is to integrate collection and analysis of operations metrics with each other and with the MRL assessment to drive consistency and quantitative validation in the risk assessment process.

CoPQ is already a dollar value and by itself represents an incurred price that the company paid due to quality risk in the manufacturing process. This value could simply be summed into the total risk value without having to undergo unit conversion. Business costs regarding the processing of QNLIs were traced with the cost team. The total labor cost, processing cost, and final closeout cost were calculated per QNLI and aggregated into a proprietary conversion factor, \(C_q\).

Cost is also already represented in dollar value, which is one of the other reasons for representing risk as an incurred cost. Target cost is subtracted from the actual cost because budgeted cost is not considered a risk by the definition provided in this thesis; actual cost contributes positively to risk and is ideally cancelled out by the target cost.

Perhaps the most difficult metric to integrate is on-time delivery. Initial discussions focused on using delivery as a normalizing percentage. For example, if total impact without considering delivery is $100, then a 50% rate of OTD would result in a final total impact of double the original, $200. Conceptually this means that the incurred risk of a process would approach infinity as the OTD% approached zero, as shown in Figure 5-4.

Using delivery as a normalization tool produces unrealistic results close to the boundaries of the relationship. As OTD approaches zero, total risk inflates sharply; an OTD of 1% would impose a multiplication factor of 100. Initial uses of normalization found that several parts showed unreasonably high risk values due to these inflationary effects. From interviews with cost and manufacturing teams, the cost of delivery tended to fluctuate around the actual cost of the part: parts that were more expensive to produce were also the costliest to the manufacturing process when delayed. Intuitively, parts are expensive because they require many processing steps and accumulate value through the manufacturing chain; these are also the parts that are critical to deliver to internal customers on time. Rather than normalizing the risk against OTD, the team implemented a delivery cost scaled by actual cost, shown in Equation 5.1.

Figure 5-4: Inverse relationship between risk and OTD with delivery as a normalization variable. As OTD approaches 0, risk approaches infinity. Simple normalization by OTD yields extreme results at boundary values.

\[ \text{DeliveryCost} = \text{ActualCost} \times (1 - \text{OTD}) \tag{5.1} \]

Scaling delivery cost by actual cost means that at low OTD, the cost incurred due to delinquent delivery approaches the entire actual cost of the part, effectively doubling its cost contribution. However, if OTD is close to 1, the delivery cost incurred is negligible.

Combining quality, cost, and delivery into a single model for calculating risk yields the following overall equation:

\[ \text{RiskImpact} = \text{CoPQ} + C_q \cdot \text{QNLI} + \text{ActualCost} + \text{ActualCost} \times (1 - \text{OTD}) - \text{TargetCost} \tag{5.2} \]

The risk impact equation is a combination of the quality, cost, and delivery metrics identified and connected to the risk dashboard. The central dashboard contains the raw input data along with the logic to calculate risk for each part in the engine using Equation 5.2. Working directly with quantitative values in a versatile unit such as cost allows for easy aggregation of risk in any grouping. To determine the risk impact of a module consisting of several parts, the total risk is simply the sum of the individual part risks. This allows for program-level aggregation so that upper management can compare risk across programs to drive high-level funding decisions.
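As a concrete sketch, Equation 5.2 can be implemented directly. The function below assumes dollar-valued copq, actual_cost, and target_cost, a QNLI count, and OTD as a fraction in [0, 1]; the value of C_Q is a placeholder, since the real conversion factor is proprietary, and the example figures are illustrative only.

C_Q = 1.0  # placeholder for the proprietary QNLI-to-dollars conversion factor

def risk_impact(copq, qnli, actual_cost, otd, target_cost):
    """Total risk impact in dollars for one part (Equation 5.2)."""
    quality_cost = copq + C_Q * qnli             # quality terms
    delivery_cost = actual_cost * (1.0 - otd)    # delivery term (Equation 5.1)
    return quality_cost + actual_cost + delivery_cost - target_cost

# Because the output is a dollar value, risk for any part grouping
# (module, assembly, or full engine) is simply a sum over its parts:
parts = [
    {"copq": 500, "qnli": 3, "actual_cost": 10_000, "otd": 0.90, "target_cost": 9_000},
    {"copq": 0, "qnli": 0, "actual_cost": 2_000, "otd": 1.00, "target_cost": 2_100},
]
program_risk = sum(risk_impact(**p) for p in parts)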

5.3 Phase 3: Risk Mitigation

The final step in the risk analysis process is enabling the manufacturing teams to manage risk mitigation via standard tools. The goal of the tool is to help manufacturing teams reduce risk impact to zero. A learning curve framework was implemented to produce a theoretical risk reduction path as the number of units produced increases. A program-level version of the tool can be used to track the cost of producing an entire engine given the aggregated risk impact of the individual parts, showing the theoretical reduction in risk after each additional engine is produced. As discussed in Section 3.4, the learning curve takes a log-linear form, and in this case the exact form is displayed in the following equation,

\[ Y = K X^N \tag{5.3} \]

where

\(Y\) is the target cost at MRL 10,

\(K\) is the current total cost to produce a unit,

\(X\) is the number of engines produced at MRL 10,

\(N\) is the learning index, defined by \(N = \log_2 L\), and

\(L\) is the learning rate of the process, with industry standard rate of 85% [10].
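A minimal sketch of Equation 5.3 follows, assuming the 85% industry rate; the dollar figure is illustrative only.

import math

LEARNING_RATE = 0.85                 # industry standard learning rate [10]
N = math.log2(LEARNING_RATE)         # learning index, N = log2(L)

def unit_cost(first_unit_cost, units_produced):
    """Cost of the X-th unit under log-linear learning (Equation 5.3)."""
    return first_unit_cost * units_produced ** N

# Each doubling of cumulative output reduces unit cost to 85% of its
# previous value: unit 2 costs 0.85x unit 1, unit 4 costs 0.85x unit 2.
theoretical_curve = [unit_cost(1_000_000, x) for x in range(1, 11)]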

Alongside the theoretical learning curve, actual data collected from the risk dashboard can be entered into the tool to track performance against theoretical learning. While the learning curve captures manufacturing improvements associated with repetition and streamlining of operations, the purpose of risk assessment analysis is to determine prioritization for implementation of risk mitigation efforts. The risk mitigation tool allows for input of discrete action plans with forecasted step effects on engine cost. Interfacing with the risk assessment portion of the process, the mitigation tool tracks theoretical, actual, and expected costs, taking into account each team’s individual action plan. The output allows manufacturing teams to estimate costs and compare their performance to industry standard learning. The next chapter will discuss the results of each phase of this thesis after implementing the methodology laid out in this chapter.

Chapter 6

Results and Discussion

This chapter discusses the results of the three phases of this project: developing an Ops-BOM, building a risk assessment system, and implementing risk mitigation tools. The combined result of the three phases is a risk assessment and mitigation process that can be used by anyone in the production chain. Consider performing a quantitative risk assessment for an entire engine program. Developing a list of critical parts can be done by generating an Ops-BOM for the program. Since the original BOM hierarchy is still embedded within the output, all parts remain accessible. Inputting the critical part list into the risk assessment dashboard begins an automated process of collecting operations data about the subject parts and computing a risk impact value. After reviewing the aggregated risk of the program, the risk mitigation tool is implemented to track the progress of program-wide risk reduction benchmarked against the industry standard learning curve.

While management implements this process at a high level, manufacturing teams can conduct their own risk assessment and mitigation efforts at a module, assembly, or even detail part level. Centralized data sources and quick processing times mean teams can conduct several assessments in parallel. The quantitative process shown in this chapter ties together the different risk frameworks laid out in Chapter 3 to produce a new method of risk management.

6.1 Operations BOM

The purpose of this phase in the risk assessment process was to develop a method for reducing the global BOM into a manageable list of parts critical to understanding the risk of producing the engine. Chapter 5 discussed the methods by which rules for determining the Ops-BOM were chosen, and the data driving those decisions. The Ops-BOM rules were integrated into an automated tool that receives a raw global BOM as input and outputs the list of parts comprising the Ops-BOM. Figure 6-1 shows the user interface for the Ops-BOM tool with Pratt & Whitney proprietary information omitted.

Figure 6-1: Automated tool interface for generating the Ops-BOM. EBOM hierarchy is represented using collapsible and expandable rows. An assignment button automatically generates the right-most column indicating which parts have been selected for the Ops-BOM. "1" indicates selection while "0" indicates omission from the Ops-BOM.

Beginning on the far left of Figure 6-1 is the hierarchical grouping of all the parts in the global BOM. In this case, all hierarchies are expanded, allowing visibility into every part of the engine program. These assemblies and subassemblies can be collapsed to display only the Ops-BOM parts for ease of conducting high-level assessments. The main method of interfacing with this tool is the button by which the Ops-BOM assignment is generated. After engaging the button, the Ops-BOM assignment column is created on the far right, and a new sheet containing only the Ops-BOM values is also created, as shown in Figure 6-2.

Figure 6-2: Tabular output of Ops-BOM tool. All Ops-BOM selected parts are collated into a single display. EBOM unique IDs are carried over in the left-most column to maintain ability to drill down into sub-assemblies and details.

The Ops-BOM rules developed earlier exist within the automated spreadsheet. The ideal user interaction is pasting a copy of the global BOM directly from the central system, clicking the button to generate the Ops-BOM assignment, and receiving the Ops-BOM list in hierarchical form and simple tabular form. The whole process takes roughly two minutes, a sharp decrease in required processing time. Before implementation of the above interface, BOM creation required a large amount of time, as teams had to consider each part in a global BOM using qualitative judgements to sort those parts into the new BOM. Although general rules make the process slightly faster, BOM creation is often treated as an activity to be done once in the lifecycle of a program due to the amount of resources necessary. Moreover, this tool benefits other manual BOM creation processes because it creates a more manageable starting point than the global BOM. While this tool is the result of research into the MRL team’s needs, the built-in rules can be changed and modified as the needs of the team change.

Figure 6-3: Progression of reduction of the BOM after implementation of each rule in Program K. After implementing all four rules, the resulting Ops-BOM is 84% smaller than the original EBOM.

Aside from resource benefits and reduction in processing time, the ultimate value of the Ops-BOM is to reduce a list of roughly 8000 parts as much as possible without compromising visibility into the overall health of the engine. Therefore, the metric of usefulness is the amount of reduction from the original BOM, assuming that research has been conducted to verify the BOM has not been over-pruned. Figures 6-3 and 6-4 show part reductions as percentages of the total number of parts for Programs K and F, respectively, plotted against the progression of rule implementation. For reference, the Ops-BOM rules are:

0. Start with full EBOM

1. Remove all parts that do not roll up to NMA (non-machined assemblies)

2. Remove all NMAs

3. Remove all parts of type: AN, AS, MS, ST, NAS, and DS

4. Remove all duplicate part numbers

The Ops-BOM tool reduced the BOM size by 86% in Program K and 88% in Program F. Since the Ops-BOM process is specifically designed to target assembly-ready parts, the reduction in BOM size does not reduce visibility into the BOM. Achieving such a large reduction with only four rules kept this phase of the risk assessment process easy to use and easily transferable between teams in the Operations organization.

6.2 Risk Assessment Dashboard

Integration of operations metrics and computation of risk impact occurs in an automated data dashboard being developed by the adjacent IT team. While this project managed the direction of the dashboard and coordinated connections with data sources, the implementation timeline was beyond the scope of this thesis. Therefore, a semi-automated dashboard was created to conduct risk assessments in the interim. This section will discuss the deployment of the semi-automated risk dashboard as well as the progress of the fully automated QLIK dashboard. Similar to the Ops-BOM process, this risk assessment dashboard is semi-automated, requiring some data inputs into the tool. While the tool interfaces directly with certain Oracle databases, some data is restricted, requiring manual input. The risk dashboard integrates the Ops-BOM while providing the logic to include connections from the Ops-BOM parts to MBOM unique semi-finished parts, as discussed in Section 5.1. Hence, while data is displayed at the Ops-BOM assembly-ready part, the BOM hierarchy can be expanded to view risk metrics for every subassembly, detail, semi-finished part, and material required to produce that assembly-ready part.

Figure 6-4: Progression of reduction of the BOM after implementation of each rule in Program F. After implementing all four rules, the resulting Ops-BOM is 88% smaller than the original EBOM.

Figure 6-5 shows the output of the Ops-BOM input into the risk dashboard. The dashboard output displays the parts from the Ops-BOM alongside operations metrics associated with quality, cost, and delivery. Each metric is displayed in its original form, and the calculated added risk in dollars due to that metric is also shown. The final column shows the computed total added risk from the combination of the key operations metrics, as expressed by Equation 5.2.

Figure 6-5: Output of the risk assessment dashboard. Ops-BOM parts are displayed alongside cost, quality, and delivery metrics. These metrics are used to calculate incurred risk, displayed in the right-most column. Ops-BOM parts can be expanded to drill down into sub-assembly data.

Due to logic in the Ops-BOM and dashboard tools, the Ops-BOM parts maintain connection to all parts in the EBOM and MBOM. As shown in Figure 6-5, each part can be expanded to drill down into subassemblies of Ops-BOM parts. The semi-automated risk tool provides value to the MRL team in the form of easy data access, the ability to compare operations metrics between parts at a program level, and a quantitative value representing the risk added by any part in the engine. This quantitative risk assessment process is being developed alongside the industry approved qualitative MRL process. However, integration of operations metrics into risk decisions can begin immediately while full automation of the process is still being validated.

Conducting program-level risk assessments is much more accessible due to the centralized location of data and the ability to aggregate quantitative risk values. Figures 6-6 and 6-7 show Pareto charts of risk in Programs K and F, respectively. Part numbers have been omitted, and parts are presented in order of risk added to the programs. In both charts, the 20 parts contributing the most added risk in each program are listed. The Pareto charts of risk for both programs show that risk is heavily skewed toward the top few risky parts. In Program K, the riskiest part represents 50% of the added risk to the entire program, and in Program F, the riskiest part represents 65% of the added risk to the entire program.

Figure 6-6: Program K Pareto chart of risk contribution for the 20 riskiest parts. The majority of risk in the program is incurred by the top part. This accurately reflects historical performance, and exposes high risk that was previously underrepresented in the legacy risk assessment process.

As a qualitative validation of the results of the risk assessment, the major drivers of risk were evaluated for Programs K and F, given that both subject programs have already been launched in some capacity. While actual risk impact values could not be retroactively determined, the qualitative ordering of top risk parts can be checked against parts that caused issues in initial production. As both Programs K and F are GTF derivatives, the parts that drove initial challenges with production of the GTF engine were checked against the top risk parts output by the quantitative risk model. In both cases, the riskiest part, constituting over 50% of added risk in both programs, was the part that caused the most production issues during initial fabrication of the GTF engine. This suggests that had the quantitative risk assessment process been in place during initial production of GTF, the risk output would have flagged this specific part as a major source of risk earlier in the program. This dashboard significantly improves the ability of the MRL and other stakeholder teams to view operations data, reducing the amount of time necessary to collect and act on information.
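A risk Pareto like those in Figures 6-6 and 6-7 can be sketched in a few lines; the part numbers and dollar values below are hypothetical stand-ins for the dashboard output.

import matplotlib.pyplot as plt

risk_by_part = {"P-01": 5.0e6, "P-02": 1.2e6, "P-03": 0.8e6}  # hypothetical

# Rank parts by risk impact and compute each part's cumulative share.
ranked = sorted(risk_by_part.items(), key=lambda kv: kv[1], reverse=True)[:20]
labels, values = zip(*ranked)
total = sum(risk_by_part.values())
cumulative = [sum(values[: i + 1]) / total for i in range(len(values))]

fig, ax = plt.subplots()
ax.bar(labels, values)                    # per-part risk impact (dollars)
ax2 = ax.twinx()
ax2.plot(labels, cumulative, marker="o")  # cumulative share of program risk
ax2.set_ylim(0, 1)
plt.show()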

While the semi-automated dashboard connects most data sources and automatically includes BOM hierarchy data, it is still a local tool with manual steps in the process. Being developed in tandem was a fully automated dashboard integrated with Pratt & Whitney’s major data analytics system. The semi-automated dashboard was built as a proof of concept for the methodology behind the final risk dashboard.

Figure 6-7: Program F Pareto chart of risk contribution for the 20 riskiest parts. Similar to Program K, this Pareto chart shows the majority of risk incurred by the highest-risk part. The high quantitative risk value indicates immediate action is necessary to mitigate issues with final engine delivery.

Although the timeline for development of the automated risk dashboard was longer than the duration of this project, a significant portion of the automated dashboard was constructed. Figure 6-8 shows the most recent iteration of the final dashboard, showing risk measurements for each part in a sample BOM. The final dashboard has been designed to be customizable and dynamic. Groups of parts can be analyzed separately, as shown with the red and green parts in Figure 6-8. The risk Pareto output is similar to the semi-automated Pareto, but integration with the business analytics interface allows for greater user customization.

The second phase of the risk assessment methodology laid out by this thesis is collecting data and using that data to drive risk calculations. Another goal of this project is to build a risk assessment process that is accessible to people throughout the production hierarchy. Implementation of the semi-automated risk dashboard cut the time to collect data on a list of parts from several days to a matter of minutes. Moreover, having an integrated tool reduces strain on the teams providing data while also encouraging manufacturing teams to use data to drive decision making.

Figure 6-8: Progress of the fully automated risk dashboard. The final risk dashboard is integrated with all data sources, allowing for real-time data updates. Connection to part attribute data adds the ability to create and modify groups of parts. Shown in red is a risk Pareto of a custom created group of parts. Shown in green is a risk Pareto of the full program Ops-BOM.

6.3 Risk Mitigation Tool

The final phase of the quantitative risk assessment process is implementation of standardized risk mitigation tools to enable manufacturing teams to manage risk once it is identified. Chapter 5 discussed the methodology behind risk management and its connection to learning in aerospace. A learning curve framework was built into a dynamic tool that can be used to manage risk mitigation efforts of any scope, from cost reduction of a single manufacturing cell to risk management of an entire engine program.

The four main trends compared in managing risk are theoretical learning, action plan, actual cost, and target cost. In the context of employing the tool to manage high-level risk for a whole engine program, these four values tracked over the production of each engine shed light on the Pratt & Whitney learning rate against industry learning, as well as the feasibility of hitting a target when compared against theoretical learning. The theoretical learning curve is generated as given in Equation 5.3, and the user interface is shown in Figure 6-9. The final tool implemented the log-linear methodology and used target cost to back-calculate the current total impact as the dependent variable. This allows the team to benchmark actual impact against the theoretical impact, assuming the process follows industry standard learning trends; a sketch of this back-calculation appears below.
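Under the same assumptions as the earlier learning curve sketch, the back-calculation simply inverts Equation 5.3: given the program target cost Y at MRL 10 and the planned unit count X, the implied first-unit cost K follows directly. The numbers are illustrative.

import math

N = math.log2(0.85)  # learning index at the 85% industry standard rate

def first_unit_cost(target_cost, units_at_mrl10):
    """Invert Y = K * X**N to recover the current total impact K."""
    return target_cost / units_at_mrl10 ** N

K = first_unit_cost(target_cost=5_000_000, units_at_mrl10=200)
theoretical = [K * x ** N for x in range(1, 201)]  # benchmark curve for actuals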

Figure 6-9: User interface for generation of the theoretical learning curve. Variable learning curve parameters are displayed in tan-colored cells. Parameters are laid out in a context aligned to performance goals of manufacturing teams for ease of use.

While costs do decrease because of gradual learning in a manufacturing process, often these cost decreases are the result of operations improvement projects that provide step decreases in cost per engine. The risk mitigation tool tracks intervention projects and projected improvements alongside natural learning to display a prediction of cost progression taking planned projects into account. Figure 6-10 shows the tool input for the action plan that later feeds into the final comparison output of all curves.

Figure 6-10: Action plan input including cost impact, timeline, and unit incorporated. This sample action plan shows the interface that manufacturing management can use to plan and track improvement efforts on high risk parts. Ownership, timeline, and unit incorporation all provide visibility and accountability in implementation of the risk mitigation effort.

The final data input is actual data, which is simply the actual measured impact of each engine after production. This set of data is used for retroactive analysis to determine how close the program came to meeting the action plan, and how the program compares to an industry standard theoretical learning trajectory. During the course of risk mitigation management, this tool easily receives inputs from the risk assessment dashboard to allow tracking of risk per part or engine produced.

Figure 6-11 depicts the final output of the risk mitigation tool, comparing all input data side by side to give a comprehensive understanding of risk mitigation progress. As shown in Figure 6-11, risk management is done through comparison of the four curves in the risk mitigation chart. Theoretical learning shows the expected natural cost decrease per engine given the industry standard learning rate. Since the industry standard rate is 85%, this curve represents a trajectory in which each doubling of cumulative output makes the corresponding unit 15% less risky to produce. The action plan curve includes natural learning benefits and integrates the discrete improvement projects input into the action plan portion of the tool. Risk decreases due to learning are generally a result of manufacturing teams becoming better at producing parts through repetition and practice. The learning curve expects the first unit to be the most difficult to produce, while each successive unit becomes easier as the processes become more familiar to manufacturing teams. Process improvement projects provide discrete improvements in risk on top of natural learning. For example, the procurement of a new machine to improve quality in a manufacturing process will provide value on top of learning and is documented separately as an action plan. Actual impact values are plotted around these curves to track performance of the program in meeting risk mitigation targets. Finally, the target final cost is shown to give an understanding of the feasibility of meeting the program target. This tool provides political capital to manufacturing teams in negotiating reasonable production goals, as quantitative evidence of discrepancy between targets and the theoretical learning rate.

Figure 6-11: Final risk mitigation curve output with all four benchmarks represented. Comparison of the learning curve, program targets, action plan, and actual data gives a comprehensive understanding of risk mitigation progress. Action plan and actuals can be benchmarked against risk reduction due to standard learning, and targets can be continuously re-evaluated based on current state.

Implementation of the learning curve based risk mitigation tool is part of both the quantitative risk assessment process laid out in this thesis and the MRL team’s overall focus on standardization of tools. This phase of the process specifically strengthens the connection between the MRL team and manufacturing teams, as the risk output directly feeds into improvement management, which is owned by each individual team.

6.4 Future Work

As the MRL team continues to redefine the risk assessment and management process at Pratt & Whitney, the results of this project set the groundwork for a large push toward integration of real-time data into the risk assessment process. The three phases of this project address specific challenges currently faced by the qualitative risk assessment process, but also comprise a single comprehensive quantitative risk assessment methodology that can be used in tandem with the legacy industry approved methods. Going forward, most of the development of this project will be in integration into central data analytics systems. The IT team is currently obtaining data connection approval with several sources that will allow for the transition from the semi-automated proof of concept model to a fully automated and customizable dashboard.

Concerning the internal logic of the risk assessment process, a major hurdle for this project was determining methods of retroactively calculating risk impact for previous programs. Ultimately, due to the lack of data collection infrastructure at the time, the team was unable to build a reasonable set of empirical risk data. However, following the completion of this project, tools from this thesis have set up the ability of the program to collect operations data alongside empirical risk data in order to build a training set. While the current risk calculation methods were determined using knowledge from industry experts and research into the cost structure of the organization, a purely quantitative method of generating the risk formula would yield interesting results. Now that the tools are built, the MRL team has the ability to continuously learn from trends in implementation of the quantitative risk assessment process.

Chapter 7

Conclusions

The hypothesis of this project was that development of standard processes and tools for identifying, analyzing, and mitigating production risk using real-time operations data significantly enhances the ability of management and manufacturing teams to understand and react to operations inefficiencies that reduce final production capacity. Figure 7-1 shows the flow of information through the Quantitative Risk Assessment process developed by this thesis, and the resulting outputs used to validate the hypothesis.

Through development and initial implementation of the quantitative risk assessment process, the results showed significant improvements in risk assessment efficiency. The quantitative process introduced automation and numerical methods into parts of the legacy risk assessment process that normally added large amounts of non-value-added time to the Operations organization. Rather than having industry experts spend weeks requesting and manipulating data, the outputs of this project centralize critical data and standardize analysis, allowing experts to focus on applying their expertise to complex problems. Integration of quantitative operations metrics into the legacy qualitative system provided a purely quantitative output that can be aggregated to describe any group of parts in an engine program. This allows management to use the same method of analysis at the part level and the program level, maintaining consistency between assessments.

Figure 7-1: Information flow diagram depicting inputs and outputs of the Quantitative Risk Assessment process. Shows movement of data from independent sources to the Ops-BOM and final dashboard, as well as the final validation interaction with the current MRL process.

Finally, development of mitigation tools increases communication between the MRL team and individual manufacturing teams, and enables manufacturing teams to take ownership of MRL and general risk improvement efforts. Streamlining the handoff between MRL and manufacturing significantly increases the Operations organization's ability to react and adapt to risk through the lifecycle of an engine program. Given the many benefits observed from the initial implementation of the quantitative risk assessment process, and the future benefits identified as the project continues to grow within Pratt & Whitney, this thesis produced a standardized process that enhances the ability of the Operations organization to understand and react to operations inefficiencies in turbine engine programs.

Bibliography

[1] Pratt & Whitney's PurePower® Geared Turbofan™ engine cited as an aviation climate solution. ThomasNet News, page 45, 2015.

[2] James R Crawford. Learning curve, ship curve, ratios, related data. Lockheed Aircraft Corporation, 144, 1944.

[3] J. Paulo Davim. Progress in lean manufacturing. Management and Industrial Engineering. Cham, Switzerland: Springer, 2018.

[4] Dan Fisher. Ready for takeoff. Forbes, 191(2):34, 2013.

[5] Peter Flinn. Managing technology and product development programmes: a framework for success. Hoboken, NJ: John Wiley & Sons, Inc., 2019.

[6] Mihály Héder. From NASA to EU: the evolution of the TRL scale in public sector innovation. Innovation Journal, 22(2):1–23, 2017.

[7] A. Pirayesh Neghab, A. Siadat, R. Tavakkoli-Moghaddam, and F. Jolai. An integrated approach for risk-assessment analysis in a manufacturing process using FMEA and DES. In 2011 IEEE International Conference on Quality and Reliability, pages 366–370. IEEE, 2011.

[8] Liaqat A. Shah, Alain Etienne, Ali Siadat, and François Vernadat. Process-oriented risk assessment methodology for manufacturing process evaluation. International Journal of Production Research, 55(15):4516–4529, 2017.

[9] M.P. Sullivan and S.F. Udvar-Hazy. Dependable engines: the story of Pratt & Whitney. Library of flight. American Institute of Aeronautics and Astronautics, 2008.

[10] Charles J Teplitz. The learning curve deskbook: A reference guide to theory, calculations, and applications. Quorum Books, 1991.

[11] Israel Tirkel. Yield learning curve models in semiconductor manufacturing. IEEE Transactions on Semiconductor Manufacturing, 26(4):564–571, 2013.

[12] Bob Trebilcock. How they did it: Pratt & Whitney's ramp up. Supply Chain Management Review, 20(3):12–17, 2016.

[13] John X. Wang. Lean manufacturing: business bottom-line based. Boca Raton, FL: CRC Press, 2011.

[14] Michael J. Ward, Steven T. Halliday, and J. Foden. A readiness level approach to manufacturing technology development in the aerospace sector: an industrial approach. Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, 226(3):547–552, 2012.
