A Project-Oriented Model-Based (MBSE) Approach for Naval Decision Support

by Z I. Jenkins

B.S. in Nuclear Engineering Technology, December 2013, Excelsior College

M.S. in Engineering Management, May 2015, The George Washington University

A Praxis submitted to

The Faculty of The School of Engineering and Applied Science of The George Washington University in partial fulfillment of the requirements for the degree of Doctor of Engineering

January 8, 2021

Praxis directed by

Timothy Blackburn, Professorial Lecturer of Engineering Management and Systems Engineering

The School of Engineering and Applied Science of The George Washington University certifies that Z I. Jenkins has passed the Final Examination for the degree of Doctor of Engineering as of November 12, 2020. This is the final and approved form of the Praxis.

A Project-Oriented Model-Based Systems Engineering (MBSE) Approach for Naval Decision Support

Z I. Jenkins

Praxis Research Committee:

Timothy Blackburn, Professorial Lecturer of Engineering Management and Systems Engineering, Praxis Director

Amir Etemadi, Assistant Professor of Engineering and Applied Science, Praxis Co-director

Aristos Dimitriou, Professorial Lecturer of Engineering Management and Systems Engineering, Praxis Co-director


© Copyright 2020 by Z I. Jenkins. All rights reserved.


Dedication

First, I wish to express sincere gratitude to my mother, Margaret H. Jenkins, my father, Kenneth Ira Jenkins, and my wife, Mary Nichole Jenkins, whose dedication and support helped me through the process of obtaining my doctorate degree. I also wish to dedicate this writing to my son, Zen Ira Jenkins, who, though not yet one year old at the time of this writing, made me realize how much harder I must strive in order to succeed and provide him with a legacy of which I may be proud.


Acknowledgements

The author wishes to acknowledge the guidance and assistance provided by the advisors at George Washington University, Dr. Amir Etemadi, Assistant Professor of Engineering and Applied Science, and Dr. Timothy Blackburn, Professorial Lecturer of Engineering Management and Systems Engineering. In addition, appreciation goes to Paul Weinstein and Patrick Buckley of the Naval-Air Systems Command, whose permission and support were critical for this research. Furthermore, the author wishes to acknowledge the assistance of Fleming Paredes of the Naval-Air Systems Command, whose engineering support aided in Java code development. Lastly, the author wishes to acknowledge the support provided by all other stakeholders who made this research possible.


Abstract of Praxis

A Project-Oriented Model-Based Systems Engineering (MBSE) Approach for Naval Decision Support

Systems engineering projects in the Navy require the use of a singular engineering method that lacks tailorability. While this method works for large-scale projects, it has caused small-scale projects (those with fewer than 1000 function points) implementing MBSE to fall behind schedule in the technology maturation phase. This study describes the design and implementation of a Decision Support Tool (DST), which fills the gap in guidance while improving schedule performance for small-scale MBSE projects. Using this specific DST, the Navy can input a set of parameters, and the DST will generate a recommended approach covering the optimal modeling tool, language, framework, and method to be applied in the technology maturation phase.

Results indicate that implementing the DST described in this study removed the original schedule slip in the technology maturation phase. This allows the benefits of MBSE to be explored without negative schedule impact. Ultimately, this research resulted in a measurable benefit to the projects that implemented the specifically designed approach, and it demonstrates that, given certain flexibility and guidance, small projects can successfully implement MBSE in a manner that is repeatable and without negative schedule risk.


Table of Contents

Dedication ...... iv

Acknowledgements ...... v

Abstract of Praxis ...... vi

Table of Contents ...... vii

List of Figures ...... x

List of Tables ...... xi

List of Symbols ...... xii

List of Acronyms ...... xiii

Glossary ...... xiv

Chapter 1—Introduction ...... 1

1.1 Background ...... 1

1.2 Research Motivation ...... 2

1.3 Problem Statement...... 3

1.4 Thesis Statement ...... 3

1.5 Research Objectives ...... 4

1.6 Research Questions and Hypotheses ...... 4

1.7 Scope of Research ...... 6

1.8 Research Limitations ...... 7

1.9 Organization of Praxis ...... 7

Chapter 2—Literature Review ...... 8

2.1 Systems Engineering Overview ...... 8

2.2 Pillars of MBSE ...... 12


2.3 Language and Complexity ...... 17

2.4 Method and Evolving Technologies ...... 22

2.5 Framework Dependencies ...... 27

2.6 Tools and Cost ...... 31

2.7 Staffing on MBSE Projects ...... 32

2.8 Conclusion ...... 33

Chapter 3—Methodology ...... 35

3.1 Methodology Overview ...... 35

3.2 Hypothesis Formulation ...... 35

3.3 Navy Program Review ...... 38

3.4 DST Formulation ...... 40

3.5 Additional Data Gathering...... 56

3.6 Analysis Methods ...... 57

3.7 Methodology Summary ...... 58

Chapter 4—Results ...... 60

4.1 Introduction ...... 60

4.2 DST Implementation Data ...... 60

4.3 Analysis Results ...... 62

4.4 C2 Sensor Case Study Overview ...... 67

4.5 C2 Sensor Case Study Execution ...... 70

4.6 C2 Sensor Case Study Results ...... 72

4.7 Project K Case Study Overview ...... 74

4.8 Project K Case Study Execution ...... 78


4.9 Project K Case Study Results ...... 79

4.10 Conclusion ...... 81

Chapter 5—Discussion and Conclusions ...... 83

5.1 Discussion...... 83

5.2 Conclusions ...... 84

5.3 Contributions to Body of Knowledge ...... 85

5.4 Recommendations for Future Research ...... 86

References ...... 88

Appendix A ...... 96

Appendix B ...... 97

Appendix C ...... 105

Appendix D ...... 106


List of Figures

Figure 1. The MBSE Process ...... 12

Figure 2. The MBSE Pillars ...... 17

Figure 3. OOSEM Process for Technology Maturation ...... 26

Figure 4. DoDAF Viewpoints ...... 28

Figure 5. UPDM Measurable Element Example ...... 29

Figure 6. UAF Measurable Element Example ...... 30

Figure 7. 2-Sample Power Curve ...... 39

Figure 8. W, X, Y, and Z Concept Mapping ...... 40

Figure 9. Method (X) Process Flow ...... 45

Figure 10. Framework (Y) Process Flow ...... 49

Figure 11. Language (Z) Process Flow ...... 52

Figure 12. Tool (W) Process Flow ...... 54

Figure 13. Equal Variance Test Results ...... 62

Figure 14. Normality Test Results ...... 63


List of Tables

Table 1. Project Function Point Distribution ...... 37

Table 2. DST Implementation Data ...... 61


List of Symbols

a. ……………………………………………………... Percent of New Innovations

b. …………………………………………… Percent of Preexisting Requirements

c. …………………………………...…………. Percent of New or Untrained Staff

d. ……………………………………………………. Model Layers of Abstraction

Ta. ……………………………………………...………………………. Actual Time

Tp. ……………………………………………………………………. Planned Time

W. ……………………………………………….………………………………. Tool

X. ………………………………………………………………………….…. Method

Y. ……………………………………………………………………..…. Framework

Z. ……………………………………………………………………….…. Language


List of Acronyms

DBSE. …………………………...……………. Document-Based Systems Engineering

DoD. …………………………………...………………………. Department of Defense

DoDAF. …………………...……… Department of Defense Architecture Framework

DST. ……………………………...…………………………….. Decision Support Tool

FOUO. ………………………...……………………….….………For Official Use Only

GMF. ………………………...……………………… Graphical Modeling Framework

IBM. ………………………..………… International Business Machines Corporation

IDEF. …………………………………………………………….. Integrated Definition

IFPUG. ……………………………...……. International Function Points User Group

MBSE. ……………………………………….……. Model-Based Systems Engineering

MODAF. ……………………………… Ministry of Defense Architecture Framework

OOSEM. …………………………….. Object-Oriented Systems Engineering Method

OPM. ………………………………………………………….. Object Process Method

PEO. ……………………………………………………..… Program Executive Office

ROSE. ………………………………..…… Relational Oriented Systems Engineering

SysML. ………………………………………………….. Systems Modeling Language

U&W. ……………………………………. Unmanned Aviation and Strike Weaponry

UAF. ………………………………………………... Unified Architecture Framework

UPDM. ………………………………………….. Unified Profile for DoDAF/MODAF

UML. ……………………………………………………... Unified Modeling Language

XML. ……………………………………………………Extensible Markup Language


Glossary

Black-box: An entity where the internal structure of the system is either not known or not considered during development.

Document-Based Systems Engineering: A systems engineering method where engineers manually generate and document artifacts in the form of text documents, spreadsheets, diagrams, and presentations to support the systems engineering lifecycle.

Function Point: A single discrete, unique function that a system shall perform.

Interoperability: The ability for like and unlike systems to exchange data and operate in a unified manner.

Model-Based Systems Engineering: A systems engineering method where engineers use a model and various modeling techniques to generate engineering artifacts and support the systems engineering lifecycle.

Modeling Framework: A set of instructions that provide common viewpoints, or additional context that standardizes and enriches the underlying data.

Modeling Language: A technique used to graphically document elements, components, and associated relationships.

Modeling Method: A sequential set of tasks that a modeling team performs to create a system model.

Modeling Tool: A program that allows for the modeling language to be documented in an underlying database in accordance with the associated modeling language.

Open Services for Lifecycle Collaboration: A software interface that allows tools to integrate data.

System Function: A single discrete, unique function that a system shall perform.

Systemigram: A set of diagrams that facilitate discussion of systems thinking.

Systems Engineering: A complex method of utilizing various thinking and organizational methods to plan, design, and manage projects over the system life cycle, from the initial concept of operations, through development, implementation, operation, maintenance, and ending in system disposal.

Tailorability: Flexibility given to a project or task to venture outside standard enterprise environmental practices.

Technology Maturation: The project phase where the project is refining user parameters, developing the systems engineering plan, performing system requirements review, performing system function review, and preparing for product development or manufacturing. This is similar to the planning phase in the institute's project lifecycle.

White-box: An entity where the internal structure of the system is known and is considered during development.


Chapter 1—Introduction

1.1 Background

This research studies the effects of applying a specifically designed decision support tool (DST) to guide and improve schedule performance on small-scale Model-Based Systems Engineering (MBSE) projects in the Navy. Projects that implement MBSE in the Navy are bound to a certain object-oriented approach; however, the approach lacks guidance and restricts small-scale projects by constraining them to the same set of rules as larger ones. This has caused small-scale projects that implement MBSE to fall further behind schedule than projects that request exemption and use classical Document-Based Systems Engineering (DBSE). However, the use of DBSE is not preferred, as there are many components of MBSE which could be advantageous when compared to DBSE. Some advantages include higher system integration, improved reliability, improved product quality, improved training, design reuse, reduced coding, and reduced development risk. The ideal solution would be one in which the user could have all the advantages of MBSE without experiencing any negative cost or schedule impact.

Implementing a decision support tool (DST) will fill the gap in guidance while improving schedule performance for small-scale MBSE projects. Using this specific DST, the Navy could input a set of parameters, and the DST would generate a recommended approach covering the optimal modeling tool, language, framework, and method. These four building blocks are the primary components utilized when implementing MBSE on a project. Following such an approach would result in a measurable schedule benefit when compared to current MBSE practices.
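To make the parameter-to-recommendation flow concrete, the sketch below shows one way such a lookup could be structured in Java (the language used for the DST prototype mentioned in the Acknowledgements). The parameter names follow the List of Symbols (a, b, c, d and W, X, Y, Z), but the thresholds and selection rules are illustrative placeholders, not the actual decision logic encoded in the Navy DST.

```java
// Illustrative sketch only: the thresholds and rules below are placeholders,
// not the actual decision logic of the DST described in this praxis.
public final class DecisionSupportTool {

    /** Project inputs, named after the List of Symbols. */
    public record ProjectParameters(
            double a,   // percent of new innovations (0-100)
            double b,   // percent of preexisting requirements (0-100)
            double c,   // percent of new or untrained staff (0-100)
            int d) {}   // model layers of abstraction

    /** Recommended approach: W = tool, X = method, Y = framework, Z = language. */
    public record Recommendation(String w, String x, String y, String z) {}

    public Recommendation recommend(ProjectParameters p) {
        // Placeholder rules: highly novel, deeply layered projects get the most
        // expressive combination; requirement-heavy transitions get lighter ones.
        if (p.a() > 50 && p.d() >= 4) {
            return new Recommendation("MagicDraw", "OOSEM", "UAF", "SysML");
        }
        if (p.b() > 70) {
            return new Recommendation("Sparx EA", "ROSE", "DoDAF", "UML");
        }
        return new Recommendation("Papyrus", "OPM", "DoDAF", "OPM");
    }

    public static void main(String[] args) {
        Recommendation r = new DecisionSupportTool()
                .recommend(new ProjectParameters(60, 20, 30, 5));
        System.out.println(r); // prints the four recommended building blocks
    }
}
```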


1.2 Research Motivation

As an MBSE practitioner for over 10 years, the author has directly contributed to the Navy's systems engineering transformation from a Document-Centric approach to a Model-Centric approach. In recent times, the MBSE concept has gained large traction in the workforce, to the point that the Navy is starting to mandate that all systems engineering projects switch from Document-Based Systems Engineering (DBSE) to MBSE. The problem is that sometimes great concepts can outpace an organization's ability to properly implement them. In this case, the author has witnessed many small-scale projects fail after switching to MBSE because they are instructed to implement a complex new process, without guidance, while simultaneously being constrained to a single approach. These policies seem to have a net-positive effect on large-scale projects exceeding $100 MM with thousands of function points. However, small programs with lesser visibility seem to struggle to achieve these same results.

Through the development of this study, a new approach has been tested and implemented on over 17 projects. The goal of this research is to improve MBSE implementation for small-scale systems engineering projects, act as a decision support guide for the Navy, and demonstrate that, given certain flexibility and guidance, small projects can successfully implement MBSE in a manner that is repeatable and without an associated schedule risk. These results will directly benefit Naval technology maturation, which in turn promotes maritime dominance and benefits the systems engineering community as a whole.


1.3 Problem Statement

The Navy's requirement that systems engineering projects use the object-oriented systems engineering method caused projects with fewer than 1000 function points to miss schedule deadlines in the technology maturation phase.

The problem identified for this praxis is that smaller projects (projects with fewer than 1000 function points) seem to miss schedule deadlines in the technology maturation phase. The purpose of the technology maturation phase is to "reduce technology, engineering, integration, and life cycle cost risk to the point that a decision to contract for Engineering and Manufacturing Development (EMD) can be made with confidence in successful program execution for development, production, and sustainment" (DoD Instruction 5000.02, 2020). In short, it is remarkably similar to a traditional "Project Planning" phase, except that it includes the development of system requirements and system design. Technology maturation was chosen because it is the phase where MBSE is most crucial to overall project success. Smaller-scale projects were chosen because the negative schedule impact of MBSE was not observed on larger projects.

1.4 Thesis Statement

Implementing a specifically designed decision support tool tailored for the Navy's systems engineering projects with fewer than 1000 function points will increase adherence to schedule in the technology maturation phase.

A decision support tool that provides guidance regarding the key components of MBSE (tool, language, framework, and method) would give projects the guidance and flexibility needed to reduce the negative schedule impact described in the problem statement.


1.5 Research Objectives

The primary objectives of this research are to establish a proven and repeatable process that will guide projects through MBSE implementation, demonstrate a direct, measurable schedule benefit for small-scale projects that elect to utilize the specifically designed process, and demonstrate the advantages of MBSE when compared to DBSE. These three objectives will sufficiently highlight the need for the DST and demonstrate the benefits of MBSE, while simultaneously solving the schedule issue in the technology maturation phase.

1.6 Research Questions and Hypotheses

Below is the list of research questions:

RQ1: What are the main factors that caused excess median schedule delays for MBSE projects when compared to DBSE projects in the technology maturation phase?

RQ2: What are the factors that caused certain outlier MBSE projects in the technology maturation phase to outperform others of similar type?

RQ3: What components of an MBSE implementation are useful for the purpose of creating a DST?

RQ4: What are the advantages and disadvantages of implementing a specifically designed DST when compared to the Navy's current method?

These research questions were selected as an overall set of high-level questions that were discovered through the development of this study. The first question that needed to be answered involved the main factors that caused excess median schedule delays. As mentioned previously, small-scale projects seem to have schedule delays in the technology maturation phase. The implementation of the DST will show whether those factors involved the components identified by the DST. Another question this research will answer is what factors caused certain MBSE projects to be outliers. As a whole, small-scale MBSE projects performed worse than DBSE projects by an average of 25 percent in the technology maturation phase, but there were a few positive outliers in the data set. This research will help to solve the outlier question and hopefully push more projects to perform similarly to the positive outliers. Another element that needs to be identified is the key components of MBSE implementation. Lastly, this study will identify the overall advantages of implementing the DST when compared to current practices. Answering this question will highlight specific advantages, disadvantages, and challenges that need to be overcome for wide-scale implementation.

Below is the list of research hypotheses:

H1: If the Navy implements MBSE on projects with fewer than 1000 function points, then projects will fall behind schedule in the technology maturation phase when compared to projects using classical DBSE.

H2: If the Navy implements a DST for MBSE method selection on projects with fewer than 1000 function points, then projects will increase adherence to schedule in the technology maturation phase.

Confirmation of the first hypothesis demonstrates that the problem exists in the first place. The claim is that small-scale projects implementing MBSE fall behind schedule in the technology maturation phase when compared to small-scale projects implementing DBSE. The first hypothesis tests this theory by analyzing a statistically significant set of projects that utilize DBSE and comparing the data to another set of similar projects that utilize MBSE in the technology maturation phase.


While confirmation of the first hypothesis demonstrates that the problem exists, the second hypothesis tests whether the implementation of the DST will solve the problem identified. This will be achieved by analyzing a statistically significant set of projects that implemented the DST and comparing the data to the previous set of similar projects that utilize DBSE in the technology maturation phase. If there is no statistically significant difference between the two sets of projects, then one can conclude that DST implementation was successful.
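As a rough illustration of the kind of comparison these hypotheses imply, the sketch below runs a two-sample t-test on schedule performance expressed as the ratio of actual to planned time (Ta/Tp, per the List of Symbols). The project values are invented for illustration, not the praxis data, and the example assumes Apache Commons Math 3 is available; the praxis's actual analysis methods are described in Chapter 3.

```java
// Illustrative only: the slip ratios below are invented, and Apache Commons Math 3
// (org.apache.commons:commons-math3) is assumed to be on the classpath.
import org.apache.commons.math3.stat.inference.TTest;

public class SchedulePerformanceTest {
    public static void main(String[] args) {
        // Schedule performance as Ta / Tp (actual over planned time);
        // values above 1.0 indicate a slip in the technology maturation phase.
        double[] dbseRatios = {1.02, 0.98, 1.05, 1.10, 0.95, 1.00, 1.04};
        double[] mbseRatios = {1.20, 1.35, 1.15, 1.28, 1.22, 1.31, 1.18};

        double p = new TTest().tTest(dbseRatios, mbseRatios); // two-sided Welch t-test
        System.out.printf("p-value = %.4f (significant at alpha = 0.05? %b)%n", p, p < 0.05);
    }
}
```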

1.7 Scope of Research

This research is focused on a statistically significant set of MBSE projects within the Navy.

First, this research covers the following MBSE Methods: Object-Process Method (OPM), Relational Oriented Systems Engineering (ROSE), Structured Analysis and Design (SA&D), the Vitech method, and Object-Oriented Systems Engineering Method (OOSEM).

Second, this research covers projects in the technology maturation phase. The scope has been limited to the technology maturation phase to mitigate DST execution risk in terms of project schedule. Small-scale projects can take several years to traverse the project lifecycle, so a single phase had to be chosen. Technology maturation was selected because it is the most architectural intense phase and thus has the largest probability to impact project schedule.

In addition, this research covers the following MBSE Frameworks: Department of Defense Architecture Framework (DoDAF), Ministry of Defense Architecture Framework (MODAF), Unified Profile for DoDAF/MODAF (UPDM), and Unified Architecture Framework (UAF).


Furthermore, this research covers the following MBSE Languages: Object-Process Method (OPM), ICAM Definition for Function Modeling (IDEF), Unified Modeling Language (UML), and Systems Modeling Language (SysML).

Lastly, this research covers the following MBSE Tools: Papyrus, Sparx Enterprise Architect (Sparx EA), MagicDraw, and Systems Design Rhapsody (SDR).

1.8 Research Limitations

The first limitation of the research is that methods, languages, frameworks, or tools not mentioned previously will not be covered. The second limitation is that this research only covers the technology maturation phase of the project; other phases will not be explored. The third limitation is that this research covers projects that have fewer than 1000 function points. The fourth limitation is the number of projects analyzed, as only 17 projects per data set will be analyzed for statistical significance at α = 0.05. Additional projects will not be explored for higher tolerances. The fifth limitation is that all projects used in the study were executed by the government, with the government being the architecture design authority. The sixth and final limitation is the business or organization: this research is limited to projects within the Navy and does not represent any other external organization.

1.9 Organization of Praxis

This praxis is organized starting with Chapter 1, which is an introduction and overview of the problem and the proposed solution. Chapter 2 is a literature review covering impactful MBSE studies. Chapter 3 covers the methodology for this research. Chapter 4 shares the results and findings of this study. Chapter 5 is the conclusion, and it also offers some suggestions for possible future investigations.


Chapter 2—Literature Review

2.1 Systems Engineering Overview

Systems engineering involves a complex method of utilizing various thinking and organizational methods to plan, design, and manage projects over the system life cycle, from the initial concept of operations, through development, implementation, operation, maintenance, and ending in system disposal. There are many methodologies for systems engineering, but two fundamental approaches govern how the systems engineering process is completed. The first is a document-based approach called document-based systems engineering (DBSE), and the other is a model-centric approach called model-based systems engineering (MBSE).

Using DBSE, systems engineers manually generate and document artifacts in the form of text documents, spreadsheets, diagrams, and presentations for a project. These artifacts include concept of operations documents, requirement specifications, requirement traceability and verification matrices, interface definition documents, matrices of interfaces, architectural description documents, system design specifications, test case specifications, and specialty engineering analyses (Delligatti, 2014, p. 2). While models are sometimes used in DBSE, the use of architecture is more informative and less model-driven. For example, when performing a requirement traceability study, a model can be used to verify that a specification includes necessary design elements, but the model will not be used to generate the specification itself. Another instance of model use in DBSE would be when an architecture is utilized to document system design for the purpose of gate reviews or training without leveraging the underlying database.


Under the DBSE approach, there are three main functions: requirements analysis, functional analysis, and synthesis. Various artifacts are created during this process such as system specifications, system element databases, and defined interface matrices. All products mentioned in the development process are created as the functions are executed, typically, without the use of an architectural model.

Although DBSE has advantages, there are also many disadvantages. Without the use of a single repository, a single source of truth cannot be easily achieved. For example, data querying can be challenging without a model repository. In order to create a new system based on a previous system, design reuse is preferred; as researchers in the field claim, “efficient retrieval of similar process models or process fragments is helpful for users… to select, customize and establish their new processes from existing process model repositories” (Huang, Peng, Feng, & Feng, 2018, p. 821). Under a DBSE construct, efficient retrieval of design information is more challenging, which can lead to schedule delays or gaps in system design.
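To make the retrieval claim concrete, the toy sketch below scores hypothetical model fragments against a query by keyword overlap. It is a minimal illustration of the idea only; the fragment names and terms are invented, and real MBSE repositories use far richer matching than this.

```java
// Toy illustration of "efficient retrieval of similar process models": a simple
// keyword-overlap score over a hypothetical fragment repository. All names are invented.
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ModelFragmentSearch {
    /** Fraction of the query terms that appear in a fragment's term set. */
    static double overlap(Set<String> query, Set<String> fragmentTerms) {
        Set<String> common = new HashSet<>(query);
        common.retainAll(fragmentTerms);
        return (double) common.size() / Math.max(query.size(), 1);
    }

    public static void main(String[] args) {
        Map<String, Set<String>> repository = Map.of(
                "radar-track-fragment", Set.of("sensor", "track", "fuse", "report"),
                "fuel-monitor-fragment", Set.of("fuel", "sensor", "alarm"));
        Set<String> query = Set.of("sensor", "track", "report");

        repository.forEach((name, terms) ->
                System.out.printf("%s score=%.2f%n", name, overlap(query, terms)));
    }
}
```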

Other problems with document-based systems engineering include the consideration of seemingly immaterial aspects that can aggregate into a serious design flaw when approached holistically. An example of this could be resilient systems. In this case, identification of potential hazards and their effects on underlying systems can be complex, and aspects can be overlooked without the use of an architecture. This is the situation when “existing methods fail to accommodate dynamic threat environment, contain unwanted ambiguity, lack architectural clarity, and fail to present an implementable methodology” (Nuss, Blackburn, & Garstenauer, 2018, p. 3393). In contrast, by modeling system resilience with MBSE, such as using state transition diagrams and system resource diagrams, the ability to capture and respond to resilience concerns is significantly improved.

The MBSE approach is “the formalized application of modeling to support system requirements, design, analysis, verification and validation activities beginning in the conceptual design phase and continuing throughout development and later life cycle phases” (Hart, 2015). Essentially, MBSE uses digital models and manages projects in a data-rich environment, having a unified, coherent model as the primary artifact. All elements are managed using data-centric specifications, which enable automation and optimization. Furthermore, “MBSE has the potential to simplify the complexity of managing the development process across multiple disciplines and domain-specific tools through progressively higher levels of abstraction, encompassing all phases of the system life cycle” (Kalawsky, O'Brien, Chong, Wong, Jia, Pan, & Moore, 2013, p. 593). In MBSE, classical DBSE products such as Word documents, spreadsheets, and other documents are still utilized. The difference is that these products are not manually created; rather, they are generated as an output from the underlying model. In this sense, all systems engineering data is completely contained within the model, including all requirements, interfaces, behaviors, and structure. Mathematical algorithms can even be contained in a model because an “algorithm can be abstracted as a system and each internal function considered as a subsystem, allowing representation using Model Based Systems Engineering (MBSE)” (Weiss, Chung, & Nguyen, 2019, p. 65). By creating robust architectural artifacts, the model builds an underlying database that acts as a single source of truth from which all systems engineering products can be generated. This allows configuration management, analysis, and control to occur inherently throughout MBSE implementation, as such functions are managed at the repository level.
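The "single source of truth" idea described above can be sketched in a few lines: artifacts such as a requirement specification are regenerated from model elements rather than maintained by hand. The Requirement record and repository below are hypothetical stand-ins, not the API of any real MBSE tool.

```java
// Minimal sketch, under assumed data structures, of generating a DBSE-style
// artifact from a model repository instead of authoring it manually.
import java.util.List;

public class SpecificationGenerator {
    record Requirement(String id, String text, String verificationMethod) {}

    static String generateSpecification(List<Requirement> modelRepository) {
        StringBuilder spec = new StringBuilder("System Requirement Specification\n");
        for (Requirement r : modelRepository) {
            spec.append(r.id()).append(": ").append(r.text())
                .append(" [Verified by: ").append(r.verificationMethod()).append("]\n");
        }
        return spec.toString(); // regenerate at any time; the model stays authoritative
    }

    public static void main(String[] args) {
        List<Requirement> repo = List.of(
                new Requirement("SYS-001", "The system shall report its position.", "Test"),
                new Requirement("SYS-002", "The system shall operate for 8 hours.", "Analysis"));
        System.out.print(generateSpecification(repo));
    }
}
```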

Although MBSE has many advantages, there are also many negative factors associated with the methodology, including complexity and adaptation resistance. At the 22nd IEEE International Conference (2019), presenter Savary-Leblanc claims that despite its theoretical advantages, the MBSE methodology remains difficult to apply. Some challenges mentioned include “an increasing diversity of users to deal with,” and that “current modeling tools are pushed to their limits” (Savary-Leblanc, 2019, p. 648). Other practitioners have also identified major challenges when applying the method, and among these challenges, “modeling tools often appear as a key barrier that still slows down the spread of the modeling approach” (Hutchinson, Rouncefield, Burden, & Heldal, 2013, p. 1). Between tool limitations and complexity concerns, it is clear that MBSE may not be ideal for every single systems engineering project. However, throughout this research, different methods will be explored to reduce these challenges.

The diagram in Figure 1 considers the processes previously identified and provides a more model-centric approach. The high-level changes in this diagram involve the inclusion of model repositories, and the shifting of analysis and control functions to interact directly with the repository. Other changes include the inclusion of use cases, operational architecture, system architecture, subsystem architecture, white-box, and black-box constructs.


Figure 1. The MBSE Process
Note. From Systems Engineering Database, by Z Jenkins, ITZ-LLC, 2020.

Like DBSE, this process is iterative, includes design loops, and outputs system specifications and baselines. For a complete understanding of MBSE, one would need to understand the different attributes and components of which MBSE is composed.

2.2 Pillars of MBSE

In the systems engineering community, there is much debate as to what comprises MBSE. The common approach is the three-pillar concept popularized by Lenny Delligatti, whereby MBSE can be organized into three main pillars: language, method, and tool. Under this construct, language is “a semiformal language that defines the kinds of elements…, the allowable relationships between them, and—in the case of a graphical modeling language—the set of notations [used] to display the elements and relationships on diagrams” (Delligatti, 2014, p. 5). Essentially, the language is used to graphically document elements, components, and associated relationships. There is also typically a back-end portion of a language that provides guidance for the recordation of such elements and interfaces to an underlying database.

The next main concept refers to the method. A modeling method is “a documented set of design tasks that a modeling team performs to create a system model… [and] ensures that everyone on the team is building the system model consistently and working toward a common end point” (Delligatti, 2014, p. 6). In this sense, a method provides order. For example, if the goal is to design a new car, one method may dictate initiating the design from a user-need perspective, considering factors such as range, mileage, and number of passengers. Then, the architect would define the components that correlate to the pre-defined higher-level needs. Another method may start with the known lower-level components due to vendor lock, such as brakes, axle, battery, and alternator. Then, the architect would work upwards to correlate high-level needs. Without debating which method is better than the other, it is clear that in both scenarios, one can utilize the same modeling language but have a completely different approach on how to create an architecture.

The last pillar is the modeling tool. Modeling tools are “designed and implemented to comply with the rules of one or more modeling languages, enabling [one] to construct well-formed models in those languages” (Delligatti, 2014, p. 7). The modeling tool allows for the modeling language to be documented in an underlying database in accordance with the associated modeling language. The modeling tool is considered an essential pillar because while languages provide guidance for database recordation, they still leave large room for ambiguity. For example, if one individual were drafting a document on a typewriter and another was using Microsoft Word, there would be clear interoperability issues. However, both individuals may be using the same language, such as English, and the same method, such as the 7-Stage Writing Process in accordance with the Education Endowment Foundation (EEF), but because they are using two different tools, the two individuals would not be able to easily collaborate. Modeling tools also govern which external applications the architecture products can integrate with. Tools that are Open Services for Lifecycle Collaboration (OSLC) compliant, for example, “enable conforming independent software and tools to integrate their data, control, process, and presentation during the entire life cycle” (Lu, Wang, & Torngren, 2020, p. 1299). This also justifies the significance of tool selection, for without the proper tool, the architecture may not be as useful because the underlying repository cannot be easily accessed.

While the three-pillar concept provides a good foundation for MBSE, there are other concepts that separate the methodology into different perspectives. The researchers Holt, Perry, Payne, Bryans, Hallerstede, and Hansen (2014) also describe MBSE as three pillars: process, tools, and people, which are different than those of Delligatti. Under this construct, process is a model-based approach whereby there is “a set of processes in place, which is properly deployed and available” (Holt et al., 2014, p. 252). Essentially, the process pillar is remarkably similar to the method pillar that Delligatti mentions, combined with some aspects of language.


The tools aspect from Holt et al.'s study is also very similar to the tools mentioned previously. However, Holt et al.'s concept is more automation-centric, and the tools include external tool interoperability considerations. Using the previous example of a word processor, Microsoft Word may be preferred over Notepad due to its ability to easily integrate Excel spreadsheets.

The last pillar that the researchers Holt et al. mention is people, which is rather unique. They consider people to be part of the necessary elements for implementing MBSE. This includes education, training, and experience. If someone does not know how to use Microsoft Office, then all advantages gained by using Microsoft Word over a typewriter are lost, even if all parties are using the same language, method, and tool.

Another tiered concept that analyzes the foundation of MBSE is explained by Mazeika, Morkevicius, and Aleksandraviciene in the article “MBSE Driven Approach for Defining Problem Domain” (2016). Under this construct, MBSE is comprised of two main elements: methodology and framework. The authors claim that methodology consists of source requirements, behavior, architecture, validation, and verification (p. 2). The authors provide examples of methodology such as IBM Harmony and OOSEM, which are identical to Delligatti's definition of method. However, the article introduces a different concept, which is framework. Framework is explained as something that “provide[s] a couple of viewpoints: one to define the problem domain and other to provide the solution for it, such as operational and systems viewpoints in DoDAF, logical and resources viewpoints in NAF and MODAF, business and engineer views in Zachman framework” (p. 2). The authors assume that the audience reading the article is already knowledgeable in MBSE frameworks such as DoDAF and did not provide full explanations. Essentially, a framework can be summarized as a set of instructions that provide common viewpoints, or additional context that standardizes and enriches the underlying data. Using collegiate writing as an example, the language is English, the method is the 7-Stage Writing Process in accordance with the EEF, the tool is Microsoft Word, and the framework may be the American Psychological Association (APA) style. Framework is therefore another important aspect that must be considered.

For the purposes of this study, various MBSE parsing techniques have been evaluated, explored, and compared. Upon analysis and review, the foundational pillars of MBSE have been concluded to be Language, Method, Tool, Framework, and People. In this sense, language affects the interface between the people, process, and systems. Method describes the processes and techniques that are used to produce the desired product. Framework provides context and guides the style during product development. Tool impacts the underlying database and the interface with external applications. People control the efficiency of a particular effort, which is based upon available staffing, education, and experience. All five factors have been identified as potential aspects that affect MBSE throughout the various stages of the project lifecycle. For this reason, all five factors were considered during development of the DST and were critical for the success of this research. The five-pillar concept can be further illustrated in a diagram.


Figure 2. The MBSE Pillars
Note. From Systems Engineering Database, by Z Jenkins, ITZ-LLC, 2020.

Figure 2 describes all five pillars and the associated impact across the project lifecycle. The components that form each pillar have not been displayed because not only are there conflicting viewpoints regarding the main pillars of MBSE, but the individual components that each pillar is composed of also have varying approaches. Such approaches will be explored in Section 2.3 and opposing theories will be discussed.

2.3 Language and Complexity

The first pillar, language, has varying approaches. Object-Process Method (OPM), Integrated Definition (IDEF), Unified Modeling Language (UML), and Systems Modeling Language (SysML) have been identified as the primary languages to explore based upon popularity in research and use in the Navy.


The first approach that was reviewed is Object-Process Method (OPM), which is “a holistic, integrated approach to the study and development of systems. OPM's core entities are objects, representing things that exist - denoted as rectangles, and processes, which are things that transform objects - denoted using an ellipse” (Cohen & Soffer, 2007, p. 94). OPM can sometimes be confusing when examining it from an MBSE pillar perspective. The name itself states that OPM is a method, but when examining the definition, it is clear that OPM is also a language. From a language perspective, OPM has various advantages and disadvantages. OPM's simplicity allows engineers to architect a complex system without rigorous training. “Using a single graphical model and a corresponding, automatically generated textual specification, OPM unifies the system's function, structure, and behavior within one frame of reference that is expressed both diagrammatically and by a subset of English” (Sturm, Dori, & Shehory, 2010, p. 228). Furthermore, because OPM is excellent at building hierarchical structures, it becomes easier to architect large systems that may have multiple mission scenarios (Dori, Perelman, Shlezinger, & Reinhartz-Berger, 2005, p. 1). Additionally, OPM can be more advantageous than other languages when attempting to represent the high-level picture of a system (Grobshtein & Dori, 2009, p. 1), and it has other advantages with reverse engineering code and design pattern recognition (Dori et al., 2005, p. 1).

One of the disadvantages of OPM is that sometimes the language does not allow for a detailed enough model to take into consideration aspects that arise during later stages of the design process (Grobshtein & Dori, 2009, p. 1). For example, if one wanted to embed attributes and operations for the purpose of code generation as an output of the model to provide a head start in development, such a capability would be very difficult, if not impossible, to achieve using OPM.

Another frequently used language is Integrated Definition (IDEF). According to researchers Zhou and Rong, IDEF is “a modeling approach based on structured analysis… composed of a sequence of graphs. The components of these graphs are diagrams and arrows. IDEF can be used to describe the functions of the system or process, relations among different functions, and the data that support the integration of these functions” (Zhou & Rong, 2010, p. 279). From a learning perspective, IDEF is more complex than OPM due to the complexity that arises when considering symbolism and various mechanisms and controls, but it is not as complex as SysML. Similar to OPM, IDEF has activities, functions, and entities, and they are all related and interconnected. When these components are illustrated in a diagram, various arrows are used to describe the relations between them (Zakarian & Kusiak, 2000, p. 137). IDEF is excellent for analysis and design when considering the function of the system and the activities of the system (Zhou & Rong, 2011, p. 1198). One of the disadvantages of IDEF is that, because the language is so activity focused, there can be issues when attempting to architect repositories and system structure.

Another language that is not commonly used but can be implemented in systems engineering is Unified Modeling Language (UML). Researchers claim, “In Model Based System Engineering (MBSE), Unified Modeling Language (UML) plays a central part of the design phase” (Sabir et al., 2019, p. 158932). As UML first began, it was a consolidation of the best features of the various Object Oriented (OO) languages, and it was adopted by the Object Management Group (OMG) in 1996 as an industry standard (Selic, 2003, p. 608). UML was also utilized heavily in the Navy prior to the mandate to switch all systems engineering projects to SysML. On projects that are granted language flexibility, UML is still a preferred language in the Navy for systems engineering projects that are software intense or require software code generation. UML is “based on a four-layer meta-model structure, [that] provides 14 diagram types that can be used to describe a system from different perspectives [such as] structure, behavior, and/or abstraction levels… This helps deal with the complexity of system specification and distribute its responsibilities among different stakeholders, among other benefits” (Torre, Labiche, Genero, Baldassarre, & Elaasar, 2018, p. 33). As mentioned, UML is excellent for architecting complex systems. UML contains 14 diagram types and thus over a hundred diagram-to-symbol combinations, making the learning complexity much higher than that of IDEF or OPM. UML is typically used for software-focused systems, as it is most likely the least complex language that is also capable of object-oriented software code generation.
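To ground the code-generation point, the snippet below shows the kind of Java skeleton that a single UML class block, with two attributes and one operation, might map to. The Waypoint class and its operation are invented for illustration; no particular code generator or naming convention from the praxis is implied.

```java
// Hypothetical result of UML-to-code mapping: a class block "Waypoint" with two
// attributes and one operation becomes a Java skeleton. All names are illustrative.
public class Waypoint {
    private double latitude;   // UML attribute: latitude : Real
    private double longitude;  // UML attribute: longitude : Real

    public Waypoint(double latitude, double longitude) {
        this.latitude = latitude;
        this.longitude = longitude;
    }

    /** UML operation: distanceTo(other : Waypoint) : Real, great-circle distance in km. */
    public double distanceTo(Waypoint other) {
        double r = 6371.0; // mean Earth radius, km
        double dLat = Math.toRadians(other.latitude - latitude);
        double dLon = Math.toRadians(other.longitude - longitude);
        double h = Math.pow(Math.sin(dLat / 2), 2)
                 + Math.cos(Math.toRadians(latitude)) * Math.cos(Math.toRadians(other.latitude))
                 * Math.pow(Math.sin(dLon / 2), 2);
        return 2 * r * Math.asin(Math.sqrt(h));
    }

    public static void main(String[] args) {
        Waypoint norfolk = new Waypoint(36.85, -76.29);
        Waypoint patuxent = new Waypoint(38.29, -76.41);
        System.out.printf("Distance: %.1f km%n", norfolk.distanceTo(patuxent));
    }
}
```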

Then, there is the comprehensive Systems Modeling Language (SysML). SysML originated from UML, with other complexities added. This language “was first adopted by the OMG in July 2006 as OMG SysML and released as a formal specification in 2007” (Cloutier, Sauser, Bone, & Taylor, 2015, p. 664). SysML was created to “refine UML and define a general purpose modeling language for systems engineering. This covers complex systems which include a broad range of heterogeneous domains, in particular hardware and software” (Vanderperren & Dehaene, 2005, p. 1). As mentioned in Section 2.1, systems engineering encompasses all phases of a project lifecycle. Unfortunately, UML was not built with this consideration. UML lacked stereotypes for items such as logistics, test integration, and hardware integration. For this reason, UML was extended to allow architects to model additional viewpoints. Hence, SysML serves to “exchange systems engineering information amongst tools, and help[s] bridge the semantic gap between systems, software, and other engineering disciplines” (Hause, Thom, & Moore, 2005, p. 10). These “extensions” are often required when tagged values have to be exact or there is high depth in system complexity.

High-depth systems can still be modeled in OPM, for example, but software code integration is less likely. For this reason, language selection needs to consider complexity versus depth. An example of complexity versus depth can be explored through an automated teller machine (ATM). Typically, an ATM has four main functions: deposit, withdraw, check balance, and change PIN. On the surface, the system is not complex. However, there may be advanced components such as an electromagnetic card feeder that destroys cards after too many failed attempts, separate physical storage areas to minimize theft, or LED (light-emitting diode) curved screens that minimize sun reflection. These components add depth to the system, but the main four functions remain. Alternatively, a tablet-based register can be complex, as it may have the ability to convert currencies, text users their receipts, and allow the cashier to play videos or surf the web when there are no customers, but it lacks depth from an engineering standpoint because it is simply an iPad with a card reader attachment. While the iPad itself may have depth, within the register context it does not, because the iPad is treated as a black-box entity. Thus, the items that need to be architected are the numerous complex features that already exist but need to be enabled.
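The complexity-versus-depth distinction can be sketched in code: the ATM's surface stays a simple four-function interface, while depth comes from internal white-box components behind it. The interface, the CardFeeder class, and the three-attempt limit below are illustrative assumptions, not details taken from the praxis.

```java
// Sketch of complexity versus depth for the ATM example. The interface captures
// the four simple surface functions; CardFeeder adds internal (white-box) depth.
// A tablet register would instead treat its iPad as a single black-box element.
interface Atm {
    void deposit(double amount);
    double withdraw(double amount);
    double checkBalance();
    void changePin(String newPin);
}

/** White-box internal component: depth behind the simple interface. */
class CardFeeder {
    private static final int MAX_FAILED_ATTEMPTS = 3; // illustrative limit
    private int failedAttempts = 0;

    /** Returns true when the card should be destroyed after repeated failures. */
    boolean recordFailedAttempt() {
        failedAttempts++;
        return failedAttempts >= MAX_FAILED_ATTEMPTS;
    }

    public static void main(String[] args) {
        CardFeeder feeder = new CardFeeder();
        for (int attempt = 1; attempt <= 3; attempt++) {
            System.out.println("Attempt " + attempt + " destroys card: " + feeder.recordFailedAttempt());
        }
    }
}
```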


After considering the pillar of language, the next pillar analyzed was Method, covering the popular methods employed in systems engineering.

2.4 Method and Evolving Technologies

The following methods were explored based upon popularity in research and use in the Navy: Relational Oriented Systems Engineering (ROSE), Object-Process Method (OPM), Structured Analysis and Design (SAD), Vitech SE Method (Vitech), and Object-Oriented Systems Engineering Method (OOSEM). Each method has various characteristics that could be advantageous in varying scenarios.

The first method, Relational Oriented Systems Engineering (ROSE), is “a general systems methodology that employs a principle of model specification and relational transformation for the purpose of system specification, analysis, and design. It is similar to, but more formal than, [a] methodology… that rests upon a principle of alternatively using abstraction and interpretation for problem solving” (Dickerson & Mavris, 2013, p. 587). The authors further state that ROSE utilizes functional and hierarchical viewpoints that align well with legacy systems engineering for system specifications. Essentially, one defines the system using requirements as would be done under DBSE, and then organizes those requirements using a combination of modeling elements, specifications, and associations. Trade-off analyses can be performed using architectural matrices to align requirements to metrics. Classical models can be used to define engineering characteristics based upon the relation to requirements mapping. Simply put, “ROSE has a [data model] that combines features of both relational and object technologies” (Hardwick & Spooner, 1989, p. 285). ROSE works well with less complex systems that may have already begun utilizing DBSE, but would prefer to transition to a more model-centric approach.

Another frequently used method, Object-Process Method (OPM), as mentioned previously in Section 2.3, is both a language and a method. Because the five-pillar concept had not been developed until now, individuals commonly refer to languages as methods, or methods as frameworks, and some disagree on where the separation occurs. Because OPM was created to specify how to graphically document elements, components, and associated relationships, as well as an order for executing design tasks, one can safely conclude that, based upon the definitions in Section 2.2, OPM is both a language and a method. Since OPM was covered previously, additional details about OPM's function will be omitted in this section. As far as implementation, OPM is primarily designed for less complex projects, projects where most of the processes are already well known, or projects that emulate other projects that already exist.

Another method, Structured Analysis and Design (SAD), focuses on describing systems as a functional hierarchy. For the purposes of this study, SAD is a separate concept from Structured Analysis and Design Technique (SADT). This is because documentation on SADT seems to focus more on the language elements and less on structured methods. According to D. M. Rickman of Raytheon Systems Company, “Structured methods have been around longer than [object-oriented] methods and in general they are more mature for large systems design. Also, most customers understand structured methods better than [object-oriented] methods. Since one of the main reasons of modeling a system is for communication with customers and users, there is an advantage in providing structured models for information exchange” (Rickman, 2001, p. 2). In general, structured methods have a reduced learning curve when compared to object-oriented methods. However, one of the flaws that Rickman highlights is that “the top down process of functional decomposition does not lead to a set of requirements that map well to existing components” (p. 2). Essentially, one must exercise caution when applying SAD as an upgrade to an existing system. Although SAD is designed for large system design, it also has difficulties with agile software-driven systems.

The Vitech method is sometimes referred to as CORE or STRATA. It is a method initially designed by the company Vitech and was further adapted for more widespread use in the early 2000s. Although the method is less common, the core fundamentals remain. Vitech was designed to “define the functional behavior, inputs and outputs, and the physical architecture, as well as the performance and resource requirements… in order to provide a unified, consistent and traceable design” (Fisher, 1998, p. 1). The Vitech method was selected because it is still in use today and because it functionally resides between OOSEM and SAD. Under the Vitech method, one starts with context diagrams, followed by requirement hierarchies, design hierarchies, tracing requirements to design, functional context mappings, a verification hierarchy, and finally verification and validation functions. After these functions are performed, the architect publishes and evaluates. Upon receiving evaluation results, the process is repeated. The Vitech method is ideal for projects that require more structure and may be newer, agile, or software-centric, but not so complex as to require full hardware-to-software integration or automation.


Of all the methods frequently used in the Navy, Object-Oriented Systems Engineering Method (OOSEM) is by far the most detailed and most widely used. As mentioned in Section 1.4, the Navy has mandated that all projects utilize OOSEM due to the method's ability to facilitate complex system design and integrate it with various software components while automating tasks such as code generation to speed up manufacturing time. OOSEM utilizes “object-oriented concepts […] into system engineering, making system architecting more extensible. This is particularly useful in today's flexible system modeling with evolving user requirements and novel technology” (Gao, Cao, Fan, & Liu, 2019, p. 164054). Similar to Vitech, OOSEM has a multi-step process which is best summarized by the following: “the main activities of OOSEM are Analyze Needs, Define System Requirements, Define Logical Architecture, Synthesize Allocated Architectures, Optimize and Evaluate Alternatives, and Validate and Verify Systems... In systems engineering, the main activities of OOSEM are performed using several iterations” (Mazeika et al., 2016, p. 2). The OOSEM process for the technology maturation phase can be further illustrated through the use of a diagram.


Figure 3. Navy OOSEM Process
Note. From Systems Engineering Database, by Z Jenkins, ITZ-LLC, 2019.

Figure 3 shows the Navy OOSEM process that has been in effect since December of 2019 for the technology maturation project phase. The process starts with the solid circle at the top of the image and ends at the target located at the bottom of the image. Accordingly, the first activity is to perform data collection, followed by analyzing stakeholder needs, analyzing system requirements, and defining the logical architecture. These processes must be performed in order and are depicted in the center of the diagram. The center processes then repeat until further refinement is no longer needed. Afterwards, the process ends with the creation of a system specification package. This snapshot only considers the technology maturation phase of a project, and each activity has elements underneath it that describe what tasks are needed to perform the higher-level activity. For this reason, showing the full OOSEM process that the Navy is implementing from start to finish would take several dozen diagrams with hundreds of elements. Not only are such diagrams out of scope for this section, but they have not been made available for public release.
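As a rough sketch of the iterative flow just described, the snippet below cycles through the center activities until refinement is no longer needed and then produces the specification package. The enum names paraphrase the activities named above, and the stopping condition is a placeholder, not the Navy's actual refinement criterion.

```java
// Minimal sketch of the Figure 3 iteration, under assumed activity names and an
// invented stopping rule; it does not reproduce the Navy's restricted process detail.
import java.util.List;

public class OosemTechnologyMaturation {
    enum Activity { DATA_COLLECTION, ANALYZE_STAKEHOLDER_NEEDS,
                    ANALYZE_SYSTEM_REQUIREMENTS, DEFINE_LOGICAL_ARCHITECTURE }

    public static void main(String[] args) {
        List<Activity> iteration = List.of(Activity.values());
        int pass = 0;
        boolean refinementNeeded = true;
        while (refinementNeeded) {
            pass++;
            iteration.forEach(a -> System.out.println("Performing " + a));
            refinementNeeded = pass < 2; // placeholder: stop after two refinement passes
        }
        System.out.println("Publish system specification package");
    }
}
```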

Overall, there are many different methods, each of which is designed to provide process guidance for MBSE. Some methods seem to be more ideal for use on new start-ups and emerging technology, while others have classical DBSE aspects that would prove to be useful when migrating existing systems to a model-centric approach. OOSEM was the last method that was studied. As such, the next pillar analyzed was Framework.

2.5 Framework Dependencies

Once the language and method have been selected, the options for framework are inherently more limited. As mentioned, frameworks provide “a common vocabulary or include a list of recommended standards and compliant products for implementation” (Sitton & Reich, 2019, p. 2109). The frameworks utilized in this study are Department of Defense Architecture Framework (DoDAF), Ministry of Defense Architecture Framework (MODAF), Unified Profile for DoDAF/MODAF (UPDM), and Unified Architecture Framework (UAF). These frameworks have been identified based upon prior method-to-language combinations as well as general use in the Navy.

The Department of Defense Architecture Framework (DoDAF) is a “comprehensive framework and conceptual model enabling the development of architectures to facilitate the ability of DoD managers at all levels to make key decision more effectively through organized information sharing” (Liu & Wu, 2011, p. 777). DoDAF allows for model information to be separated into different perspectives called viewpoints. The different viewpoints can be displayed in a figure.

Figure 4 describes the main 8 viewpoints represented by DoDAF. DoDAF does not explicitly specify language, but rather, content. An AV-2, for example, is an integrated dictionary. This dictionary can be represented by an Excel matrix, UML diagram, or even a text document.

[Figure 4 presents the DoDAF viewpoints in tabular form, grouped into the All Viewpoint (AV), Capability Viewpoint (CV), Operational Viewpoint (OV), Services Viewpoint (SvcV), Standards Viewpoint (StdV), and Systems Viewpoint (SV), with the individual views (e.g., AV-1 Overview and Summary Information, AV-2 Integrated Dictionary, OV-5 Operational Activity Model, SV-1 Systems Interface Description) listed under each group.]

Figure 4. DoDAF Viewpoints
Note. From Systems Engineering Database, by Z Jenkins, ITZ-LLC, 2020.

The Ministry of Defense Architecture Framework (MODAF) is similar to DoDAF in that it specifies the same eight viewpoints but refers to them differently. DoDAF’s project viewpoint, for example, closely aligns with MODAF’s acquisition viewpoint. Other minor differences are not within the scope of this study.

The Unified Profile for DoDAF/MODAF (UPDM) is designed to provide “a consistent, standardized means to describe DoDAF 1.5 and MODAF 1.2 architectures in UML-based tools as well as a standard for interchange. UPDM, like DoDAF and MODAF, is [not a] process, and it is also not a methodology” (Hause, 2010, p. 430). UPDM is similar to DoDAF in that it separates perspectives in the form of viewpoints, except that it is more in-depth. UPDM contains the main higher-level elements such as the All View, Capability View, Project View, Operational View, Services View, Systems View, and Standards View. However, UPDM provides additional guidance regarding the actual connections used within a diagram. UPDM thus borders on the definition of a language and can only be used with UML or SysML. Figure 5 provides an example that demonstrates the restrictions mandated by UPDM.

Figure 5. UPDM Measurable Element Example Note. From Systems Engineering Database, by Z Jenkins, ITZ-LLC, 2020.

Figure 5 defines the interaction between a measurable element and an actual measurement. Using UPDM, defining a measurement requires two objects: a measurement type and a measurement. Although this is simple, other frameworks such as DoDAF do not provide measurement guidance, and a single block could be utilized to define a measurement. Other frameworks such as UAF add complexity, as it could take up to four blocks to define a measurement; however, this could be desired for complex systems. The measurement element has been provided as a simple example to explain framework complexity; there are hundreds of additional elements that are not included.

One more framework to consider is the Unified Architecture Framework (UAF), which is like UPDM except that it contains more detail. For example, the higher viewpoints are divided into ten separate areas: Metadata View, Strategic View, Operational View, Services View, Personnel View, Resources View, Security View, Project View, Standards View, and Actual-Resource View. Furthermore, element types are even further refined than those of UPDM. The level of refinement is illustrated in Figure 6.

Figure 6. UAF Measurable Element Example Note. From Systems Engineering Database, by Z Jenkins, ITZ-LLC, 2020.

Figure 6 describes the dataset that is associated with a measurable element, how a measurable element interfaces with a measurement set, and the different associations involved. UPDM, for example, did not delineate the difference between a measurement and a measurement set. This example is one of hundreds of architectures outlined in the specification. At this level of detail, UAF borders on being both a language and a framework. Similar to UPDM, UAF was designed to work only with UML and SysML. UAF was the last framework that was studied. After framework, the next pillar analyzed was Tools.

2.6 Tools and Cost

The tools evaluated in this study are Papyrus, Sparx Enterprise Architect (Sparx EA), NoMagic Cameo Enterprise Architect (NoMagic), and IBM Rational Rhapsody (IBM). The names of these companies will be utilized to represent their associated software, as names and revisions are constantly changing with each new release. These tools were selected based upon their use in the Navy and their ability to utilize the previously selected languages. “Tools,” as one of the pillars of MBSE, is listed after language and method because organizations typically have less control over tool selection due to security reasons or limited budgets.

Papyrus is an Eclipse-based open-source MBSE tool that is compatible with UML, SysML, XML, and the Graphical Modeling Framework (GMF) (Eclipse Papyrus RCP, 2020-06 release (4.8.0)). GMF can be utilized to perform OPM operations. Because Papyrus is completely open source, the software does not have any costs associated with it. Although Papyrus is very robust, its main flaws are security, software bugs, and limited features. The US Navy has approved Papyrus installation on certain unclassified systems, but its use is still limited due to security reasons. Additionally, due to its open-source nature and a decentralized software development team, Papyrus is more prone to software glitches.

Sparx EA is another modeling tool similar to Papyrus, except that it is not open source and costs a fixed price of $699 at the time of writing (Enterprise Architect 15, Sparx Systems, 2020). Sparx EA allows the use of UML and SysML. Upon further research, Sparx EA can also implement IDEF (Logical Data model - IDEF1X, 2020). However, there is no mention of OPM.

NoMagic’s tool is another MBSE tool that can be used to implement MBSE on a project, run engineering analysis for design decisions and trade studies, perform requirements verification and traceability analysis, and analyze metrics. NoMagic is compatible with UML and SysML but does not have the ability to utilize OPM or IDEF. NoMagic is more robust than Sparx EA and Papyrus, but it comes at a higher cost than Sparx EA (ITZ Systems Engineering Database).

IBM’s tool is the most expensive option explored in this study (ITZ Systems Engineering Database). Similar to NoMagic, the IBM tool can be used to create architectures and perform engineering analysis, trade studies, requirements verification, and metrics analysis. IBM’s tool is also useful for code generation and is compatible with UML, SysML, OPM, and IDEF, as well as other languages that have not been mentioned. The IBM tool is the most robust option available.

As project managers plan the effort, in addition to the tools and cost, they also need to consider staff. This is one aspect that some models leave out, yet people are essential to the success of any project.

2.7 Staffing on MBSE Projects

There are three main approaches to consider when staffing an MBSE project: a man-power approach, an experience approach, and an educational approach.


The man-power approach is to hire as many individuals as the budget allows, with less consideration given to training and education. This approach is better suited to methods such as OPM, where the language and processes are simple to understand and the system is already known.

The experience approach is to hire as many experienced architects as possible on an MBSE project. This could be useful for projects with high complexity and depth that require prior knowledge of a system.

The last approach is leveraging education. In this approach, MBSE projects aim to hire individuals with the highest education in the field. This may be useful when implementing object-oriented methods and new innovations that do not rely on interfacing with existing systems.

Staffing, as a whole, takes all three aspects into consideration, but one should be mindful of which aspects are more important based upon the budget and the MBSE methods utilized. After all MBSE impact factors have been identified, a tool can be developed to navigate the various factors.

2.8 Conclusion

The literature research demonstrated that the advantages of MBSE are well known. However, there were conflicts regarding the different factors that impact MBSE, as well as conflicts regarding the different components that make up each factor. Thus far, a tool has not been developed that demonstrates the different factors of MBSE or how those factors and their components should be applied to a project. Since the implementation of MBSE is complex and requires many tiers of consideration, a specialized tool should be utilized to navigate the five pillars of MBSE. This research will help to solidify the components of MBSE while providing a process for selecting an appropriate systems engineering language, framework, method, and tool based on varying project input factors. This in turn will demonstrate that success in MBSE requires tailorability; if the tailored approach is correctly implemented, program managers of smaller-scale projects can be confident that their projects will receive all the benefits of MBSE without an associated schedule risk.


Chapter 3—Methodology

3.1 Methodology Overview

The fundamental methodology used for the identification and resolution of schedule impact factors affecting projects that implement MBSE was primarily based upon the five fundamental pillars and the associated components outlined in Chapter 2. The first four pillars (language, method, framework, and tool) were categorized and assigned separate process flows based upon anticipated impact derived from the literature review, past performance, and the author’s experience. The fifth pillar, people, was included as an impacting factor within the other processes, but it was not assigned its own process flow because the scope of the research did not allow for much control over staffing. After the process flows were mapped, a decision support tool (DST) was developed based upon the process flows, and 17 separate projects implemented the DST. Data were collected before, during, and after implementation for further analysis, and the implementation process and findings are discussed in this chapter (Methodology) and in Chapter 4 (Results).

3.2 Hypothesis Formulation

The first foundation of resolving the MBSE-associated schedule issue relied upon proper hypothesis formulation. The hypotheses must allow the research to demonstrate that the problem exists, show that a unique implementation will solve the problem identified, and ensure that both the problem and the solution are measurable and able to be analyzed. The problem was first identified when the author’s employer, ITZ, started receiving several complaints from the Navy regarding the new MBSE Naval guidance and challenges that customers were experiencing when implementing the guidance. Upon guidance review and dozens of hours consulting with local staff, a conclusion was reached that the issue was not with MBSE itself, but rather the implementation method combined with a lack of tailorability. Evidence from analyzing company records indicates that the negative impact was primarily focused on smaller projects, although there were insufficient data to statistically verify this. Regardless, given the limited scope of the research, smaller projects were more desirable because they typically had a shorter duration. The shorter duration allowed for DST execution in a timeframe with a manageable scope and faster results.

Once project size had been identified as an influencing factor, a system needed to be utilized to quantify project size. Number of function points (also known as system functions) was identified based upon use in the software and systems engineering communities to measure the functional size of a product.

The International Function Point Users Group (IFPUG, 2020) defines a function point as “an internationally standardized unit of measure used to represent software size.” A function point is a single discrete, unique function that a system would perform. In 1979, Allan Albrecht proposed Function Point Analysis, which provided managers with probable size estimates of the development process. Function point analysis has been found useful by many developers because function points are “technologically independent, consistent, repeatable, and help normalize data, enable comparisons, and set project scope and client expectations” (Furey, 1997, p. 28). Based upon usage in both the software and systems engineering communities, function points appear to be the most effective method for determining project size.


Furthermore, function points are also more desirable than other measurable components, such as the number of requirements or the number of entities (also known as “blocks”), because other components can be more easily impacted by factors outside of complexity. A design requirement, for example, that mandates a color scheme would add to the requirement count but would not have an impact on product complexity. The number of blocks (systems, subsystems, components, etc.) could be used as a measure of project size but does not consider the interfaces between the blocks. For example, a use case involving a television remote with unidirectional communication would appear to have the same complexity as a use case involving a remote-control airplane with bidirectional communication: in both cases the number of blocks remains the same, even though the airplane remote clearly has more functions than the television remote.

Once function points had been identified as the ideal size factor, data were pulled from ITZ and Navy databases to identify 50 random projects by function point, separated into groups of 1000. Grouping was analyzed on a per-thousand basis due to proprietary data concerns. Table 1 highlights the results gathered:

Table 1. Project Function Point Distribution (Assortment of Projects by Function Points)
0-1000: 41 projects
1001-2000: 1 project
2001-3000: 0 projects
3001-4000: 0 projects
4001-5000: 0 projects
5001-6000: 0 projects
7001-8000: 0 projects
9001-10000: 1 project
10001+: 7 projects
Note: Derived from ITZ project repository and Navy Teamwork Cloud repository

Table 1 clearly shows a division between projects with 1000 or fewer function points and those above 1000; most of the projects fall in the 0-1000 function point range. For this reason, 1000 function points was utilized as the limiting size factor for the study.
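As a simple illustration of this grouping step (the repositories and query mechanisms are not public, so the values below are hypothetical), the per-thousand binning can be expressed in a few lines of Java:

    import java.util.List;
    import java.util.Map;
    import java.util.TreeMap;

    public class FunctionPointBinning {

        // Group projects into per-thousand function point bins (0-1000, 1001-2000, ...),
        // mirroring the grouping used for Table 1.
        static Map<String, Integer> binByThousand(List<Integer> functionPointCounts) {
            Map<String, Integer> bins = new TreeMap<>();
            for (int fp : functionPointCounts) {
                int lower = (fp <= 1000) ? 0 : ((fp - 1) / 1000) * 1000 + 1;
                int upper = (lower == 0) ? 1000 : lower + 999;
                bins.merge(lower + "-" + upper, 1, Integer::sum);
            }
            return bins;
        }

        public static void main(String[] args) {
            // Hypothetical counts; the actual 50 projects came from the ITZ and Navy repositories.
            System.out.println(binByThousand(List.of(189, 420, 950, 1200, 9400)));
        }
    }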


After determining the function point limit, the other limiting factor that needed to be identified was project phase. Because small-scale projects take several years to traverse the project lifecycle, a single phase had to be chosen. Technology maturation was selected because it is the most architecturally intense phase and thus has the largest probability of impacting the project schedule.

Once these factors had been identified, a strong hypothesis was developed to demonstrate that a negative schedule impact exists, limited by the number of function points and the project phase. The first hypothesis, H1, is as follows: “If the Navy implements MBSE on projects with less than 1000 function points, then projects will fall behind schedule in the technology maturation phase when compared to projects using classical DBSE.” This hypothesis frames the problem in a manner that is measurable and properly scoped to the study.

If H1 is confirmed and the problem is proven to exist, then the second hypothesis, H2, will determine whether implementation of a DST had a positive impact on schedule, given the identical scope. H2 is as follows: “If the Navy implements a DST for MBSE method selection on projects with less than 1000 function points, then projects will increase adherence to schedule in the technology maturation phase.” Similar to H1, H2 is measurable and properly scoped to the study. The next factor that needed to be identified was sample size.

3.3 Navy Program Review

Before requesting permission to perform the study in the Navy, all significant research attributes needed to be identified, as the process took several months between initial request and signature. The last remaining attribute considered prior to requesting permission was sample size, which was determined by a power analysis for a 2-sample t-test at an alpha of 0.05.

Figure 7. 2-Sample t-Test Power Curve Note. Minitab Output, by Z Jenkins, ITZ-LLC, 2019.

Figure 7 shows the Minitab output when analyzing the 2-sample t-test power curve at an alpha of 0.05. The sample size identified is 17; thus, 17 projects needed to be identified for each data set. Since there are three data sets, the total number of projects analyzed was 51. All 51 projects selected were executed by the government, with the government being the architecture design authority.

When selecting the projects, it was also important to ensure that all 51 projects had a consistent process for schedule deadline estimates. As such, all projects participating in the study were asked to submit planning estimates based upon official program objective memorandum (or equivalent) schedules that had been approved by the project manager, the chief engineer, and the funding authority.


After establishing the research hypotheses and resolving the ideal sample qualifications and sample size for statistical significance, permission was obtained from the Naval program office to utilize non-FOUO data and execute the DST on a statistically significant set of projects. H1 and H2 were presented to the Program Executive Office of Unmanned Aviation and Strike Weaponry (PEO[U&W]), and permission was obtained (Appendix A). Once permission was obtained, DST formulation began.

3.4 DST Formulation

The first step in formulating the DST was a rudimentary concept mapping of the primary pillars derived from the literature review, past performance, and the author’s experience.

Figure 8. W, X, Y, and Z Concept Mapping Note. From Systems Engineering Database, by Z Jenkins, ITZ-LLC, 2020.


Through concept mapping, the pillars of language, method, framework, and tool were categorized and assigned separate process flows based upon anticipated impact. Figure 8 shows the categories of the pillars and the variables assigned for the purposes of programming the DST. Under this construct, each pillar was assigned a different variable on a scale from 0 to 100, and the different attributes were separated evenly on this scale as a natural progression. This is a potential area where the tool can be further refined in future studies, because the progression was averaged rather than explicitly defined using past performance data. The range of 0 to 100 was chosen for the mathematical simplicity of attribute separation. A different scale, such as 0 to 9, could have been used, but dividing a category such as tool into equal ranges would then have resulted in fractional values (e.g., 0-2.25, 2.26-4.5, etc.). If the category ranges are evenly spaced, the minimum and maximum of the value range do not impact the DST outcome; however, the range of 0 to 100 provided a dataset that is more easily understandable.

The variable X was selected to represent method and was on a scale between 0 and 100. The components within method were then spaced evenly such that the output range of 0 to 20 represented ROSE, 21 to 40 represented OPM, 41 to 60 represented SAD, 61 to 80 represented Vitech, and 81 to 100 represented OOSEM. The assumptions made for even spacing of selections were based upon the literature review and subject matter expertise. As mentioned in the literature review, OOSEM is by far the most detailed MBSE method, while OPM and ROSE are less complex. The five methods analyzed were then ranked from lowest to highest based on project management agility and technological innovation. Once it was determined that OOSEM is more aligned with complex innovations than Vitech, Vitech more than SAD, SAD more than OPM, and OPM more than ROSE, based upon the literature review and subject matter expert input, the five components were spaced evenly on the scale as noted under Method in Figure 8. Even spacing was utilized because although the stakeholders were able to determine ranking, it was difficult to determine any weighting that should be assigned to a particular method. For example, it was determined that OOSEM was better than Vitech for Agile systems, but it was not clear by how much. Accordingly, even spacings were assigned as a sort of best guess; however, the ranges could be tailored in the future to further refine the DST.
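Expressed in code, the even spacing for method reduces to a simple range lookup. The short Java sketch below is only an illustration of the ranges listed above, not the Appendix B implementation:

    public class MethodRanges {

        // Map a final method score X (0 to 100) to the recommended method using the
        // evenly spaced ranges noted under Method in Figure 8.
        static String recommendMethod(double x) {
            if (x <= 20) return "ROSE";
            if (x <= 40) return "OPM";
            if (x <= 60) return "SAD";
            if (x <= 80) return "Vitech";
            return "OOSEM";
        }

        public static void main(String[] args) {
            System.out.println(recommendMethod(91));  // prints OOSEM
        }
    }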

The next variable, Y, represented framework and was on a scale between 0 and 100. The components within framework were then spaced evenly such that the output range of 0 to 33 represented DoDAF/MODAF, 34 to 66 represented UPDM, and 67 to 100 represented UAF. The assumptions made for even spacing of selections were based upon the literature review and subject matter expertise. As mentioned in the literature review, UAF is by far the most detailed MBSE framework, while DoDAF is less complex. The three frameworks analyzed were then ranked from lowest to highest based on system complexity and technological innovation. Once it was determined that UAF is more aligned with complex innovations than UPDM, and UPDM more than DoDAF or MODAF, the literature review and subject matter expert input were utilized to space the three components evenly on the scale as noted under Framework in Figure 8. Even spacing was utilized because although the stakeholders were able to determine ranking, it was difficult to determine any weighting that should be assigned to a particular framework. For example, it was determined that UAF was better than DoDAF for systems with higher layers of abstraction, but it was not clear by how much. Accordingly, even spacings were assigned as a sort of best guess; however, the ranges could be tailored in the future to further refine the DST.

The variable Z represented language and was on a scale between 0 and 100. The components within language were spaced evenly such that the output range of 0 to 25 represented OPM, 26 to 50 represented IDEF, 51 to 75 represented UML, and 76 to 100 represented SysML. The assumptions made for even spacing of selections were based upon the literature review and subject matter expertise. As mentioned in the literature review, SysML is by far the most detailed MBSE language, while OPM and IDEF are less complex. The four languages analyzed were then ranked from lowest to highest based on system complexity and learning curve. Once it was determined that SysML is harder to learn than UML, UML more than IDEF, and IDEF more than OPM, based upon the literature review and subject matter expert input, the four components were spaced evenly on the scale as noted under Language in Figure 8. Even spacing was used because although the stakeholders were able to determine ranking, it was difficult to determine any weighting that should be assigned to a particular language. For example, it was determined that IDEF was better than UML for less experienced architects, but it was not clear by how much. Accordingly, even spacings were assigned as a sort of best guess; however, the ranges could be tailored in the future to further refine the DST.

The variable W represented tool and was on a scale between 0 and 100. The components within tool were then spaced evenly such that the output range of 0 to 25 represented Papyrus, 26 to 50 represented Sparx EA, 51 to 75 represented NoMagic, and 76 to 100 represented IBM. The assumptions made for even spacing of selections were based upon the literature review and subject matter expertise. As mentioned in the literature review, the IBM tool is the most expensive and difficult tool to learn, while Papyrus is free and less complex. The four tools analyzed were then ranked from lowest to highest based on cost and learning curve. Once it was determined that the IBM tool suite is harder to learn than NoMagic products, NoMagic more than Sparx EA, and Sparx EA more than Papyrus, based upon the literature review and subject matter expert input, the four components were spaced evenly on the scale as noted under Tool in Figure 8. Even spacing was used because although the stakeholders were able to determine ranking, it was difficult to determine any weighting that should be assigned to a particular tool. For example, it was determined that Sparx EA was better than NoMagic for less experienced architects, but it was not clear by how much. Accordingly, even spacings were assigned as a sort of best guess; however, the ranges could be tailored in the future to further refine the DST. The fifth pillar, people, was included as an impacting factor within the other processes, but it was not assigned its own process flow because, as mentioned previously, the scope of the research did not allow for much control over staffing.

Once the high-level concept was mapped, the different variables were organized into process flows. MBSE was utilized to develop the flows so that they could be evaluated within the local community. The flows were thus constructed in the Unified Modeling Language (UML) due to its convertibility into software code, as well as its common understanding in the Naval community. The process flows were converted into Java code for ease of DST implementation and can be viewed in Appendix B.
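As a small illustration of that conversion (this is not the Appendix B code itself), the scope guard that opens every flow, [function points > 1000], translates directly into a Java branch:

    public class GuardConditionExample {

        public static void main(String[] args) {
            // The same scope guard appears at the start of the method, framework,
            // language, and tool process flows.
            int functionPoints = 189;  // illustrative value
            if (functionPoints > 1000) {
                System.out.println("Out of scope: the DST applies to projects with 1000 or fewer function points.");
            } else {
                System.out.println("In scope: continue to the method (X) calculation.");
            }
        }
    }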

The first variable mapped was method (X). Method includes consideration of the percent of new innovations, the percent of preexisting requirements, project management style, and staffing restrictions. These four components were selected by subject matter experts within the government, based upon the literature review in this study and engineering knowledge, as contributing factors for determining project management agility and technological innovation. Contributing factors that could not be measured were not included in the process flow. Once the four components were selected, they were each assigned up to 25 possible points, totaling the 100 depicted on the scale in Figure 8. Equal spacing was utilized because although the factors were identified, weights could not be assigned. Accordingly, even spacings were assigned as a sort of best guess; however, the ranges could be tailored in the future to further refine the DST. A process flow was then created for method (Figure 9).

Figure 9. Method (X) Process Flow Note. From Systems Engineering Database, by Z Jenkins, ITZ-LLC, 2020.


The UML process flow identified in Figure 9 graphically represents the DST code for method discovery. The process flow steps through a set of rules and formulas that must be followed to calculate the final X value, which falls within the ranges defined in Figure 8 and can be correlated to a final method recommendation. The process begins by evaluating the number of function points. This can be observed by starting with the solid circle labeled start on the diagram and following the arrows, as appropriate, until the target labeled end is reached; every formula passed as one follows the arrows according to the guard conditions should be applied to calculate the final value. If there were more than 1000 function points, then the DST would not be used because such projects are out of scope for this study: following the arrows would cause X to equal 0, the end target node would be reached, the DST would end, and none of the other calculations would be performed. If the function points were less than 1000, then following the arrows would result in a user data request regarding whether a preexisting architecture existed. This would be the case if, for example, the project is an upgrade to an existing system. If a preexisting system did exist, the DST would request the percent of preexisting requirements (variable b); if a preexisting architecture did not exist, the DST would request the percent of new innovations (variable a). As mentioned, the percent of preexisting requirements and the percent of new innovations were selected by subject matter experts within the government, based upon the literature review in this study and engineering knowledge, as contributing factors for determining project management agility and technological innovation. The calculations associated with variables a and b (X = X + a/4 and X = X + (100 - b)/4, respectively) would be utilized to discover the amount of new technology and design reuse. In this sense, the percentages (variables a and b) refine the associated ranges, and the division by 4 prevents the maximum contribution from exceeding 25. In the Navy, a majority of older systems utilized DBSE, so if the system was an upgrade requiring an interface with the existing architecture, a method such as ROSE or OPM would be more desirable due to its ability to interface with DBSE processes.

This saves time because less architecture would be recreated due to reuse. Engineering integrity is still preserved because, as discussed in the literature review, DBSE or MBSE hybrid methods such as ROSE would still be effective if it is within the bounds of the established method. For this reason, additional information still needs to be considered.

The next attribute within method was project management style. If, for example, a project is using Agile, processes higher on the method scale such as OOSEM are more effective, as discussed in the literature review. For this reason, the calculation for Agile increases the X value by 25 (X = X + 25), or by an intermediate amount if the project is a hybrid. As mentioned, Agile usage was selected by subject matter experts within the government, based upon the literature review in this study and engineering knowledge, as a contributing factor for determining project management agility and technological innovation. The government data only categorize projects as Agile, Non-agile, or Hybrid. In the process flow, 25 points are assigned for Agile projects as the maximum contribution, 0 points are assigned for Non-agile projects as the minimum contribution, and 12.5 points are assigned for hybrid Agile projects as an in-between contribution towards project management agility and technological innovation. Even spacing was utilized as a best guess, since the government data did not provide the exact percentage of Agile usage.

The last variable for method considered the fifth pillar, people. The percent of new or untrained staff (variable c) has an impact on method selection. As discussed in the literature review, more complex methods such as OOSEM and Vitech have a higher adaptation resistance due to fundamental engineering changes and method complexity. A large group of untrained staff, for example, would be more effective modeling in OPM rather than OOSEM, particularly if the modeling effort is still within the bounds of the established method. For this reason, an increasing percentage of untrained staff should contribute less to the resulting X value; in addition, a division by 4 is utilized so that the maximum possible contribution is 25. Accordingly, the following formula is used: X = X + (100 - c)/4. The final X value then correlates to the ranges previously defined. For example, if the final X value were calculated to be 91, then OOSEM would be selected as the recommended method. It is especially important to carefully follow the process flows (or run the DST software) to calculate an accurate final result.
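A minimal Java sketch of this calculation, written directly from the description above, is shown below. It is a simplified reading of Figure 9 rather than the Appendix B implementation, and the illustrative values in main are not drawn from any study project:

    public class MethodCalculation {

        // a = percent of new innovations, b = percent of preexisting requirements,
        // c = percent of new or untrained staff.
        static double calculateX(int functionPoints, boolean preexistingArchitecture,
                                 double a, double b, String managementStyle, double c) {
            if (functionPoints > 1000) {
                return 0;  // out of scope; the flow ends immediately
            }
            double x = 0;
            // New innovations versus design reuse (at most 25 points).
            x += preexistingArchitecture ? (100.0 - b) / 4.0 : a / 4.0;
            // Project management style: Agile adds 25, Hybrid 12.5, Non-agile 0.
            if ("Agile".equalsIgnoreCase(managementStyle)) {
                x += 25;
            } else if ("Hybrid".equalsIgnoreCase(managementStyle)) {
                x += 12.5;
            }
            // People pillar: X = X + (100 - c)/4, at most 25 points.
            x += (100.0 - c) / 4.0;
            return x;
        }

        public static void main(String[] args) {
            System.out.println(calculateX(500, true, 0, 40, "Hybrid", 20));  // 47.5, the SAD range
        }
    }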

All the variables such as project size, function points, existing technology, management style, and staffing, are thus inputs to method calculation. The different calculations utilized are based upon the research performed in literature review, government input, and the author’s experience. Effectiveness of this process flow would only be realized after confirmation of H2. Once the method had been identified, the next variable, framework (Y) could be calculated.

Framework calculation includes the consideration of layers of abstraction and the percent of new innovations. These two components were selected by subject matter experts within the government, based upon the literature review in this study and engineering knowledge, as contributing factors for determining system complexity and technological innovation. Contributing factors that could not be measured were not included in the process flow. Once the two components were selected, they were each assigned up to 50 possible points, totaling the 100 depicted on the scale in Figure 8. Equal spacing was utilized because although the factors were identified, weights could not be assigned. Accordingly, even spacings were assigned as a sort of best guess; however, the ranges could be tailored in the future to further refine the DST. Once the components had been selected and assigned dependencies and ranges, a process flow for framework was created (Figure 10).

Figure 10. Framework (Y) Process Flow Note. From Systems Engineering Database, by Z Jenkins, ITZ-LLC, 2020.


The UML process flow identified in Figure 10 graphically represents the DST code for framework discovery. The process flow steps through a set of rules and formulas that must be followed to calculate the final Y value, which falls within the ranges defined in Figure 8 and can be correlated to a final framework recommendation. Like method, the framework process started by ensuring the function points were less than 1000 due to study scope limitations. The next consideration involved the output from method (X). As discussed in the literature review, if ROSE or OPM were selected, only DoDAF/MODAF could be used, because UAF and UPDM do not have enough flexibility to allow for methods that are not object-oriented. This is achieved in the process flow by hard-setting the Y value to 1 if X is less than 60, which can be observed by following the arrows starting with the solid circle labeled start. After the function point decision is passed, the arrows lead to two guard conditions: one that continues the process flow, and another that hard-sets the Y value to 1 and results in a final recommendation of either DoDAF or MODAF. If an object-oriented method were selected, the framework calculation continues by requesting the number of model layers of abstraction (variable d). Layers of abstraction were used to help quantify depth. In the Navy, typical models range between 3 and 5 layers of abstraction. If models were expected to have fewer layers than this range, they lack depth and were considered simpler, so classic frameworks such as DoDAF would be utilized. If models were above the range, they would have a larger depth, which influences complexity, and thus a more robust framework such as UAF would be utilized; this would be further proven through confirmation of H2. The DST accounts for the layers of abstraction by increasing the Y value by either 0, 25, or 50 depending on complexity. This range was chosen as a scaling function between 0 and 50 to limit the maximum contribution, and even spacing between layers 3, 5, and 6 was selected based upon subject matter expertise and historical project data within the government. The last factor in the framework calculation reutilized variable a, the percent of new innovations, to add to the complexity calculation. As discussed in the literature review, as the percent of new innovations increases, the Y value should also increase, which results in a more object-oriented framework. Additionally, a division by 2 is utilized to set the maximum point contribution to 50. The formula utilized for the DST was thus Y = Y + (a/2). Lastly, like all process flows, the output can be “hard-set” if desired due to interoperability constraints.
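The framework calculation can be sketched in Java as follows. The hard-set and the a/2 term come directly from the description above; the exact layer-of-abstraction thresholds are defined in Figure 10, so the 3/5 split below is only one plausible reading:

    public class FrameworkCalculation {

        // x = method score, d = model layers of abstraction, a = percent of new innovations.
        static double calculateY(int functionPoints, double x, int d, double a) {
            if (functionPoints > 1000) {
                return 0;  // out of scope
            }
            if (x < 60) {
                return 1;  // hard-set: non-object-oriented methods keep DoDAF/MODAF
            }
            double y = 0;
            // Layers of abstraction contribute 0, 25, or 50 points (3/5 split assumed here).
            if (d > 5) {
                y += 50;
            } else if (d > 3) {
                y += 25;
            }
            // Percent of new innovations contributes at most 50 points: Y = Y + a/2.
            y += a / 2.0;
            return y;
        }

        public static void main(String[] args) {
            System.out.println(calculateY(500, 75, 6, 40));  // 70, the UAF range
        }
    }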

Once the framework was discovered, the next variable, language (Z), could be calculated. Language calculations include the percent of new or untrained staff, as well as prior dependencies. Contributing factors that could not be measured were not included in the process flow. Equal spacing was utilized because although the factors were identified, weights could not be assigned. Accordingly, even spacings were assigned as a sort of best guess; however, the ranges could be tailored in the future to further refine the DST. Once the components had been selected and assigned dependencies and ranges, a process flow for language was created (Figure 11).


Figure 11. Language (Z) Process Flow Note. From Systems Engineering Database, by Z Jenkins, ITZ-LLC, 2020.

The UML process flow identified in Figure 11 graphically represents the DST code for language discovery. The process flow steps through a set of rules and formulas that must be followed to calculate the final Z value, which falls within the ranges defined in Figure 8 and can be correlated to a final language recommendation. Like framework, the language process started by ensuring the function points were 1000 or less due to study scope limitations. The next step was to identify dependencies. As discussed in the literature review, the OPM method must align with the OPM language, because OPM is both a language and a method; similarly, if ROSE was selected, then IDEF must be utilized. This is accounted for in the DST by following the arrows in the process flow: X values between 0 and 20 (ROSE) hard-set the final Z value to 26 (which results in IDEF), and X values between 21 and 40 (OPM) hard-set the final Z value to 1 (which results in OPM). After dependencies were considered, the next attribute reused variable c, which considered the people pillar by estimating the percent of untrained staff. The formula used to calculate this portion was Z = Z + 49 - (c * 0.49); the maximum possible value of 49 was utilized based upon the remaining possible points after all dependencies were considered. Using a declining scale for untrained staff is important because as the language scale increases, so do complexity and adaptation resistance. If a project had a large group of untrained staff, for example, it would be more effective to use the IDEF language rather than SysML, but only if the modeling effort was still within the bounds of the selected language.
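Only part of the language flow is spelled out in the text, so the Java sketch below shows just those steps; the remaining dependency contributions appear in Figure 11 and the Appendix B code:

    public class LanguageCalculation {

        // Hard-set rules driven by the previously calculated method score X.
        // Returns null when no hard-set applies and the flow continues.
        static Double hardSetFromMethod(double x) {
            if (x <= 20) return 26.0;  // ROSE -> IDEF
            if (x <= 40) return 1.0;   // OPM method -> OPM language
            return null;
        }

        // People pillar contribution: Z = Z + 49 - (c * 0.49), where c is the
        // percent of new or untrained staff.
        static double staffContribution(double c) {
            return 49.0 - (c * 0.49);
        }

        public static void main(String[] args) {
            double x = 75;  // illustrative method score
            Double hardSet = hardSetFromMethod(x);
            double z = (hardSet != null) ? hardSet : staffContribution(10);
            System.out.println(z);
        }
    }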

Once the language was discovered, the last variable, tool (W), could be calculated. Tool calculations include the consideration of layers of abstraction, the percent of new innovations, and the percent of untrained staff. These three components were selected by subject matter experts within the government, based upon the literature review in this study and engineering knowledge, as contributing factors for determining learning curve requirements. Contributing factors that could not be measured were not included in the process flow. Once the three components were selected, the percent of new innovations and layers of abstraction were each assigned up to 50 possible points, totaling the 100 depicted on the scale in Figure 8. Equal spacing was utilized because although the factors were identified, weights could not be assigned. The percent of untrained staff was then utilized as a final deprecating factor, which has the most weight. This is because, as mentioned in the literature review, tool efficiency is most heavily dependent on the staff’s ability to learn the tool; a highly untrained staff, for example, should have the ability to override other factors. As such, the process flow for tool was created (Figure 12).


Figure 12. Tool (W) Process Flow Note. From Systems Engineering Database, by Z Jenkins, ITZ-LLC, 2020.

The UML process flow identified in Figure 12 graphically represents the DST code for tool discovery. The process flow steps through a set of rules and formulas that must be followed to calculate the final W value, which falls within the ranges defined in Figure 8 and can be correlated to a final tool recommendation. Like language, the tool process started by ensuring the function points were less than 1000 due to study scope limitations. Languages that are not object-oriented are hard-set to a particular tool; in contrast, the object-oriented languages have wider tool adaptation and thus follow the entire process flow. OPM, for example, must use Papyrus, and IDEF must use the IBM product. This was achieved in the process flow by utilizing the previously calculated language value (Z): if Z is between 0 and 25 (OPM), the W value is hard-set to 1 (Papyrus), and if Z is between 26 and 50 (IDEF), the W value is hard-set to 76 (IBM). Once dependencies were resolved, the next step was to estimate depth by reusing variable d for layers of abstraction and variable a for the percent of new innovations to estimate complexity. Layers of abstraction were accounted for by increasing the W value by either 0, 25, or 50 depending on complexity. This range was chosen as a scaling function between 0 and 50 to limit the maximum contribution, and even spacing between layers 3, 5, and 6 was selected based upon subject matter expertise and historical project data within the government. As discussed in the literature review, as the percent of new innovations increases, the tool complexity, or W value, should also increase, which results in a more sophisticated tool. Additionally, a division by 2 is utilized to set the maximum point contribution to 50. The formula utilized for the DST was thus W = W + (a/2). The concept is that the higher-end tools tend to support more complex architectures; however, the higher-end tools also tend to be more complex themselves and require a greater learning curve. For this reason, the last variable, c, was reutilized through W = W - c as a final deprecating factor, which has the most weight. The thought is that individuals would be able to model effectively in Sparx EA more quickly than in the NoMagic product; however, this only works if the architecture is not too complex for the tool. Such assumptions would be further proven after confirmation of H2.
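A Java sketch of the tool calculation follows. The hard-sets, the a/2 term, and the final subtraction of c come directly from the description above; as with framework, the layer thresholds below are only one plausible reading of Figure 12:

    public class ToolCalculation {

        // z = language score, d = model layers of abstraction,
        // a = percent of new innovations, c = percent of new or untrained staff.
        static double calculateW(int functionPoints, double z, int d, double a, double c) {
            if (functionPoints > 1000) {
                return 0;   // out of scope
            }
            if (z <= 25) {
                return 1;   // OPM language -> Papyrus
            }
            if (z <= 50) {
                return 76;  // IDEF -> IBM tool
            }
            double w = 0;
            // Layers of abstraction contribute 0, 25, or 50 points (3/5 split assumed here).
            if (d > 5) {
                w += 50;
            } else if (d > 3) {
                w += 25;
            }
            w += a / 2.0;  // W = W + a/2: more new innovation favors a more sophisticated tool
            w -= c;        // W = W - c: untrained staff is the final, heaviest deprecating factor
            return w;
        }

        public static void main(String[] args) {
            System.out.println(calculateW(500, 80, 4, 40, 30));  // 15, the Papyrus range
        }
    }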

From a tool perspective, it was also important to consider cost. By going higher in the tool hierarchy, additional costs typically occur. These costs can add up when considering equipping an entire engineering team or sector. However, for the purposes of this study, cost was not considered because it was out of research scope. In future studies, it would most likely be advantageous to include cost in the DST calculations.

Once all the process flows for method, framework, language, and tool had been considered, they were reviewed by engineering staff at ITZ and the Navy. The process flows in Figures 9-12 were the actual flows utilized to implement the DST on the 17 identified projects in Appendix A. The process flows were also converted into software code (Appendix B) for ease of implementation, and a sample screenshot of the compiled software is included in Appendix C. The software was developed based upon the process flows and was not strictly necessary for DST execution; however, it made the process easier to implement for those less familiar with MBSE practices or the UML language.

After DST development, the software tool was sent to the Navy to analyze. Upon acceptance, existing project data were gathered and analyzed.

3.5 Additional Data Gathering

To test hypotheses H1 and H2, three sets of 17 projects were identified at random based upon similar characteristics. The first data set consisted of 17 projects with less than 1000 function points in the technology maturation phase that implemented MBSE without the use of the DST; the second data set consisted of 17 projects with less than 1000 function points in the technology maturation phase that implemented DBSE without the use of the DST; and the last data set consisted of 17 projects with less than 1000 function points in the technology maturation phase that implemented MBSE with the use of the DST. All three data sets are represented in Appendix D.

Projects selected for use in this study were similar in size, scope, phase, and industry. As mentioned previously, all projects were less than 1000 function points, putting them in similar size categories, and project scope was compared to ensure relative goal alignment. For example, extremely niche projects that were funded by the Navy but not directly applicable to the Navy mission (such as installing cameras on the southern border of New Mexico) were omitted. All projects identified had the specific goal of either developing or enhancing a Naval capability. Project phase was limited to technology maturation to align with the scope of this study, and the industry, as mentioned, was internal to the Navy.

To properly represent schedule deviation on projects with varying timelines, all deviation data were normalized with the following formula: (Ta / Tp) - 1, where Ta is the actual duration in days or weeks (depending on the data provided) and Tp is the planned duration in the same unit. The data had to be normalized for analysis and screened to ensure that they were suitable for public access. Based on the formula, a positive deviation represents a schedule slip (the project finished later than expected), and a negative deviation represents a schedule advance (the project finished earlier than expected).
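The normalization is straightforward to compute; the following Java fragment, with illustrative durations, mirrors the formula exactly:

    public class ScheduleDeviation {

        // Normalized schedule deviation: (Ta / Tp) - 1, with both durations in the same unit.
        // Positive values are schedule slips; negative values are schedule advances.
        static double normalizedDeviation(double actualDuration, double plannedDuration) {
            return (actualDuration / plannedDuration) - 1.0;
        }

        public static void main(String[] args) {
            // A project planned for 100 days that finished in 108 days slipped by 8 percent.
            System.out.println(normalizedDeviation(108, 100));  // about 0.08
        }
    }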

Once data (as indicated in Appendix D) had been collected, initial analysis can begin to determine the best method to quantify the results.

3.6 Analysis Methods

The analysis methods selected were based primarily on the data gathered. For example, the first step was to test for normality and variance to determine whether a 2-sample t-test or Mann-Whitney and Kruskal-Wallis tests would be used to evaluate significance and accept or reject the two hypotheses. A normal probability plot was used to determine whether there was goodness of fit between the sample data and a normal distribution. Additionally, a variance test was performed to determine whether there was a gap in equal variances and whether that gap was resolved once the DST had been implemented.


After normality and variance were determined, the first test was to discover significance and accept or reject the two hypotheses. This was achieved by comparing the medians of the data samples at a sensitivity of alpha = 0.05. The first two data sets used to confirm H1 were the set of 17 projects that implemented MBSE without the use of the DST and the set of 17 projects that implemented DBSE without the use of the DST. If the p-value were less than 0.05, then the first hypothesis would be confirmed.

To confirm the second hypothesis, two separate tests for significance were performed. The first test compared the set of projects that implemented DBSE without the use of the DST to projects that implemented MBSE with the use of the DST. If the p-value were greater than 0.05, then one would fail to conclude that the two sets are significantly different; however, the second hypothesis would not yet be confirmed. The second hypothesis would only be confirmed after the second test for significance, which compared the set of projects that implemented MBSE without the use of the DST to projects that implemented MBSE with the use of the DST. If the p-value of this test were less than 0.05 and the averages were compared to show improvement, then the second hypothesis would be confirmed.
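The study's tests were run in Minitab, but the same comparison can be reproduced programmatically. The Java sketch below assumes the Apache Commons Math library (org.apache.commons.math3) is on the classpath and uses made-up deviation values rather than the Appendix D data:

    import org.apache.commons.math3.stat.inference.MannWhitneyUTest;

    public class SignificanceCheck {

        public static void main(String[] args) {
            // Illustrative, made-up normalized deviations for two samples.
            double[] mbseWithoutDst = {0.25, 0.30, 0.10, 0.00, 0.45, 0.20, 0.15};
            double[] dbseWithoutDst = {0.00, -0.05, 0.02, 0.00, 0.10, -0.02, 0.00};

            MannWhitneyUTest test = new MannWhitneyUTest();
            double pValue = test.mannWhitneyUTest(mbseWithoutDst, dbseWithoutDst);

            // Compare against alpha = 0.05, as described above.
            System.out.printf("Mann-Whitney p-value: %.4f (significant: %b)%n",
                    pValue, pValue < 0.05);
        }
    }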

3.7 Methodology Summary

Confirmation of both hypotheses demonstrated the success of the fundamental methodology behind this research, which involved utilizing the five primary pillars of MBSE and incorporating them into process flows. After the process flows were mapped, a decision support tool (DST) was developed based upon the resulting software architecture. Three sets of 17 separate projects were selected, their schedule data were normalized, and the samples were analyzed for normality. Significance tests were performed to determine whether the hypotheses and research questions were answered through this study. Additional data collected before, during, and after implementation were also evaluated for further understanding of the results.


Chapter 4—Results

4.1 Introduction

To examine the results, a final set of data consisting of 17 MBSE projects that implemented the DST was collected and compared to 17 projects that implemented MBSE without the use of the DST and 17 projects that implemented DBSE without the use of the DST. The results were analyzed to conclude that both research hypotheses were confirmed and that DST implementation had a significant impact on the schedule success of small-scale systems engineering projects. Throughout the chapter, DST implementation data are presented and interpreted, hypotheses are analyzed, and research questions are answered. Additionally, three case studies were selected and explored to aid understanding of DST results. This was achieved by explaining in detail how the example projects implemented the DST, the resulting schedule impact, and feedback from the staff regarding the execution process and any advantages or disadvantages experienced by participating in this study. Ultimately, the 17 projects identified experienced a net-positive impact by implementing the DST; however, some individuals in the community hold reservations regarding the DST’s applicability to larger-scale projects or industries outside of the Navy.

4.2 DST Implementation Data

The DST was tested against 17 projects as previously discussed; however, only 15 data points were collected based on direct implementation, due to a change in portfolio-level priorities and project scope creep. The Vehicle Handover project, for example, was one of the 17 projects originally selected to implement the DST, but the entire project was put on hold due to a change in funding. In order to compensate for the two missing data points required for statistical significance, two past projects were selected based on DST applicability; that is, if the two projects had implemented the DST in the past, the outcome would have been the same because the method, framework, language, and tool selections would have been the same. Additionally, the data set with the selected projects still passed the significance tests even though they negatively impacted DST performance. Without the additional two projects, the study performed better, which is demonstrated in Section 4.3, Analysis Results. Considering this change, all 17 DST data points are represented in Table 2. The two past projects previously mentioned are noted as Past Project 1 and Past Project 2:

Table 2. DST Implementation Data (Project Name: Schedule Deviation)
Motion Imagery Capability: 8%
Still Imagery Capability: 0%
Project G: 33%
Standard CDL Profile: -23%
Bandwidth Efficient CDL Profile: -12%
Project H: 0%
Sensor C2 Handover Capability: 5%
ICWG and CM Capability: -33%
Project I: 15%
Tracks Schema: -6%
Project K: 0%
Project L: 25%
Standard C2 Profile: 0%
Advanced C2 Profile: 0%
Advanced C2 Schema: 0%
Past Project 1: 12%
Past Project 2: 25%
Note: Derived from ITZ project repository

Table 2 describes the projects that implemented the DST. Schedule deviation numbers were calculated based on the formula discussed in the Methodology chapter ([Ta / Tp] - 1) and rounded to the nearest percentage point. Actual project names are displayed based on the release permission letter in Appendix A; however, the names of projects that were not explicitly listed have been changed to a letter (e.g., Project K) to ensure confidentiality. After the final DST data had been gathered, the results were analyzed based on the methods described in Chapter 3.

4.3 Analysis Results

Based on the analysis method, the first step was to test for normality and equal variance to determine the type of equivalence test that should be used to measure significance and accept or reject the two hypotheses.

Figure 13. Equal Variance Test Results Note. Minitab Output, by Z Jenkins, ITZ-LLC, 2020.

In Figure 13, the test demonstrates equal variance. This can be observed because the Levene’s p-value is less than 0.05 and the intervals show overlap. The next step is to test for normality which can be displayed in a graph.


Figure 14. Normality Test Results

Note. Minitab Output, by Z Jenkins, ITZ-LLC, 2020.

As observed in Figure 14, the three data sets are not all normally distributed. The probability plot of DBSE without use of the DST has a p-value less than 0.05, and the probability plot of MBSE without use of the DST has a p-value less than 0.05. The probability plot of MBSE with the use of the DST has a p-value greater than 0.05; however, observation of the probability plot illustrates heteroscedasticity. For these reasons, it can be concluded that a two-sample t-test should not be utilized to determine significance.


Due to these normality and variance results, the Mann-Whitney and Kruskal-Wallis tests were selected to determine significance and to accept or reject the two hypotheses. This was achieved by comparing the medians of the data samples at a sensitivity of alpha = 0.05. The first hypothesis was tested by comparing 17 projects with less than 1000 function points in the technology maturation phase that implemented MBSE without the use of the DST to 17 projects with less than 1000 function points in the technology maturation phase that implemented DBSE without the use of the DST. The results are noted in the four attributes below:

1. MBSE without DST median: 0.25

2. DBSE without DST median: 0.00

3. Mann-Whitney p-value adjusted for ties: 0.04

4. Kruskal-Wallis p-value adjusted for ties: 0.04

The test results show that the p-value of 0.04 is less than 0.05. Thus, the null hypothesis is rejected, since a significant difference exists. Additionally, the MBSE without DST median is 0.25. This means that projects implementing MBSE without the use of the DST have a tendency to slip 25 percent past schedule deadlines, compared to projects implementing DBSE without the use of the DST, which have a tendency to meet deadlines.

Based on the test results, H1 has been confirmed. If the Navy implements MBSE on projects with less than 1000 function points, then projects will fall behind schedule in the technology maturation phase when compared to projects using classical DBSE.

Testing the second hypothesis was achieved using two separate Mann-Whitney and Kruskal-Wallis tests. The first test compared 17 projects with less than 1000 function points in the technology maturation phase that implemented DBSE without the use of the DST to 17 projects with less than 1000 function points in the technology maturation phase that implemented MBSE with the use of the DST. The results are noted in the seven attributes below:

1. MBSE with DST 17-project median: 0.00

2. MBSE with DST 15-project median: 0.00

3. DBSE without DST median: 0.00

4. Mann-Whitney 17-project p-value adjusted for ties: 0.99

5. Mann-Whitney 15-project p-value adjusted for ties: 0.66

6. Kruskal-Wallis 17-project p-value adjusted for ties: 0.97

7. Kruskal-Wallis 15-project p-value adjusted for ties: 0.65

The test results show that both the 17-project p-value and the 15-project p-value are greater than 0.05. Thus, the null hypothesis failed to be rejected, and a significant difference was not shown to exist. Additionally, the MBSE with DST median is 0.00, which means that projects implementing MBSE with the use of the DST tend to meet schedule deadlines, similar to projects implementing DBSE without the use of the DST.

Unfortunately, MBSE with the DST did not perform better than DBSE without the DST.

However, there was improvement over the existing method (MBSE without the DST) which was confirmed with another Mann-Whitney and Kruskal-Wallis test.

The next test compared 17 projects with less than 1000 function points in the technology maturation phase that implemented MBSE without the use of the DST to 17 projects with less than 1000 function points in the technology maturation phase that implemented MBSE with the use of the DST. The results are noted in the seven attributes below:

1. MBSE with DST 17-project median: 0.00

2. MBSE with DST 15-project median: 0.00

3. MBSE without DST median: 0.25

4. Mann-Whitney 17-project p-value adjusted for ties: 0.03

5. Mann-Whitney 15-project p-value adjusted for ties: 0.02

6. Kruskal-Wallis 17-project p-value adjusted for ties: 0.03

7. Kruskal-Wallis 15-project p-value adjusted for ties: 0.02

The test results show that the 17-project p-value of 0.03 and the 15-project p-value of 0.02 are both less than 0.05. Thus, the null hypothesis is rejected, and a significant difference exists. In this case, the 15-project data set demonstrated better schedule performance than the 17-project data set because the two additional projects had a negative schedule impact. Even with the two negatively impacting projects included, the 17-project data set still passed the Mann-Whitney and Kruskal-Wallis tests, resulting in a p-value of 0.03. Because a significant difference exists with both data sets, projects implementing MBSE without the use of the DST have a tendency to slip 25 percent past schedule deadlines, compared to projects implementing MBSE with the use of the DST, which have a tendency to meet deadlines. Since the Navy has mandated the use of MBSE and DBSE is no longer an option, the DST has proven to be a useful tool that can be used to get projects back on track.


Based on the test results, H2 has been confirmed. If the Navy implements a DST for MBSE method selection on projects with less than 1000 function points, then projects will increase adherence to schedule in the technology maturation phase.

With both hypotheses confirmed, this study was successful. In addition, a few case study examples are presented to demonstrate how the DST was actually executed on a project. These case studies also illustrate DST results and answer research questions.

4.4 C2 Sensor Case Study Overview

The first Case Study to be analyzed is the C2 Sensor Handover Capability. The goal of this project was to hand over control of an unmanned aircraft sensor from one authorized control station to another authorized control station. The specific aircraft and sensor configurations will not be mentioned so that this study remains publishable.

The first step that occurred for DST implementation was to determine if this project was applicable to the study. The following 6 characteristics were collected:

1. Program Name: Program A

2. Project Name: C2 Sensor Handover Capability

3. Number of Function Points: 189

4. Project Stage: Technology Maturation

5. Type of Project: Systems Engineering

6. Planned Schedule: 92 Days.

The program name and project name are not relevant; however, the program name had been modified to “Program A” for obscurity reasons. The number of function points had been predefined in the prior solutions analysis phase of the project. Because the number of function points was between 0 and 1000, this met the function point qualifications for this study. The project was also transitioning into the Technology Maturation phase, so this met the project stage qualifications for this study. The project type qualifications for this study were met because this was a systems engineering project. Finally, the planned schedule was also within scope of this study because 92 days fit the study window of May 1st, 2020 through September 1st, 2020. This project actually completed the material solutions analysis phase and was ready to start the technology maturation phase on April 17th, 2020, but it was purposely put on hold for a month in order to qualify for this study in expectation of DST completion on May 18th, 2020. Since this project met all qualifications for this study, approval was received from the project manager to begin DST execution once the DST was finalized.

The actual DST code was not yet complete on May 18th, so the logic flows were submitted for implementation instead of the Java software program. This, however, was not an issue because the DST software was programmed based on the DST logic flows, so by implementing the logic flows, the project was executing the DST. In order to implement the logic flows, the following 9 input variables were gathered:

1. Number of Function Points: 189

2. Percent of New Innovations: 60

3. Percent of Preexisting Requirements: 80

4. Project Management Style: Agile

5. Percent of New or Untrained Staff: 0

6. Model Layers of Abstraction: 3


7. Preexisting Architecture: None

8. DoDAF or MoDAF Preferred: DoDAF

9. Interoperability Requirements: IBM Tool Usage

Identification of the 9 variables above was critical for executing the DST process flows. Based on the input variables, the first step for following the process flows was method calculation. The first calculation for method was X = X + (60/4) = 15. This formula was utilized because a preexisting architecture did not exist, and the project manager estimated that the percent of new innovations was 60 percent. The next calculation for the value of X was X = X + (100-80)/4 = 15 + 5 = 20, based on the percent of preexisting requirements. The next calculation was X = X + 25 = 20 + 25 = 45 based on the project management style being agile. The last calculation was X = X + (100-0)/4 = 45 + 25 = 70 based on the percent of new or untrained staff. Since the final X variable was 70, the output method selection for the DST was Vitech.

Because of the previously calculated X value and the estimated layers of abstraction, the final process flow for framework calculation was simplified to Y = Y + (60/2) = 30 based on the DST process flows. A final Y value of 30 with no interoperability requirements indicated that the final output framework should be either DoDAF or MoDAF. Because the project manager selected DoDAF as preferred, the final DST output framework was DoDAF.

Language selection began with the previously calculated X value, which was 70, so the Z value was hard-set to 51 as a starting point. The next calculation considered the percent of untrained staff, which was 0. Thus Z = Z + [49 - (0*0.49)] = 51 + 49 = 100. With a final Z calculation of 100, and no predefined interoperability requirements, the DST selected language output was SysML.

To calculate the tool output, the DST process started with the previously calculated language (Z) value; however, the project lead identified an interoperability requirement that recommended the use of IBM, which overrode the DST tool selection by defaulting the W value to 76. Based on all calculations, the final DST output variables were as follows: Vitech, DoDAF, SysML, IBM. This was a drastic change from normal operations because, if the project were to follow the original Navy mandate, the variables used would have been OOSEM, UAF, SysML, NoMagic. By giving the project flexibility to implement the DST, 3 of the 4 identified variables had changed. Once the DST output variables were defined, the next step was for the project to implement the DST based on the previously defined output variables.
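To make the arithmetic above easier to follow, the four scores can be replayed in a few lines of Java. This is an illustrative sketch only: the class and variable names are invented, and the IBM override of the tool score is taken directly from the narrative above rather than recomputed.

public class C2SensorDstWorkedExample {
    public static void main(String[] args) {
        // C2 Sensor Handover inputs (Section 4.4)
        int newInnovations = 60;   // percent of new innovations
        int preexistingReqs = 80;  // percent of preexisting requirements
        int untrainedStaff = 0;    // percent of new or untrained staff

        // Method score X (no preexisting architecture, agile management)
        double x = 0;
        x += newInnovations / 4.0;           // 15
        x += (100 - preexistingReqs) / 4.0;  // + 5  = 20
        x += 25;                             // agile -> 45
        x += (100 - untrainedStaff) / 4.0;   // + 25 = 70 -> Vitech (61-80)

        // Framework score Y (3 layers of abstraction add nothing)
        double y = newInnovations / 2.0;     // 30 -> DoDAF/MoDAF range (0-33)

        // Language score Z (X in 61-80 hard-sets Z to 51, then the staff adjustment)
        double z = 51 + (49 - 0.49 * untrainedStaff); // 100 -> SysML (76-100)

        // Tool score W: the IBM interoperability requirement overrides the calculation
        double w = 76;                       // -> IBM (76-100)

        System.out.printf("X=%.1f Y=%.1f Z=%.1f W=%.1f%n", x, y, z, w);
    }
}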

4.5 C2 Sensor Case Study Execution

The project was initialized with the creation of a context diagram in accordance with the Vitech method to understand the scope of the work and provide a high-level viewpoint that acted as the combined aggregate of all subordinate functions. This context diagram was satisfied through the use of the Systems Resource Flow Description as described by the DoDAF framework. Although DoDAF provided “the principles of obtaining architecture data, it [did] not explain the specific and operational guidance to users” (Zhang et al., 2020, p. 33445). For this reason, a framework must be paired with a language and method to fully understand how to develop a specific view. To satisfy the Vitech method, the architects ensured that the Systems Resource Flow Description was developed in the “Architecture Domain” in a manner that allows for ease of linking to the “Requirements Domain” at the same layer of abstraction. The first category was managing comms (radio communications), which accounted for establishing voice and data connections. The next category involved initiating the handover sequence, which accounted for handover authentication, sensor configuration, and relinquishing control from the supervising entity to the supervised entity. Once control of the sensor was transferred, the sensor was managed by the supervised entity. Managing control of the sensor included sensor specific functions (such as tilt, zoom, and pan, if the sensor were a camera). Once utilization of the sensor was complete, control of the sensor had to be returned to the supervising entity. Lastly, there was a category identified for potential sensor failures and error handling. All categories were represented by SysML blocks, and the associated attributes were linked to the high-level requirements, which follows the Vitech method.

Once the context diagrams and requirement hierarchies were complete and linked in the underlying database at the same level of abstraction, the next step that the project took was to follow the DST and develop functional context mappings. Additionally, these diagrams were created following the Vitech method, in SysML, using the IBM tool, and utilized the Systems Functionality Description in accordance with the DoDAF framework. The chain of activities for managing comms started with initializing a connection for both the supervising and supervised entities and configuring antennas on both sides. Once the antennas were configured, a radio heartbeat sequence was established, and a connection status message was sent.

It is important to note that each activity was further decomposed into sub-activities, which were not displayed due to distribution restrictions. However, the next level of functional decomposition is not necessary to understand how the DST was applied on the project.

The last major set of architectural views, developed after system function decomposition, verified and validated the system functions through a set of diagrams that map each message requirement and trace it to an associated function in sequential order. These messages satisfied the derived message requirements and were traced to both the requirement hierarchy architecture and the system functionality description. This modeling task was performed to act as a verification function according to the DST. Similarly, the view was developed in SysML, using the IBM toolset, and the Systems Event-Trace Description followed the DoDAF framework.

After final modeling reviews confirmed that all requirements were satisfied, the final requirements document was generated and incorporated in a Technical Data Package (TDP), which was passed over to the engineering manufacturing and development phase. Handing over the TDP concluded the systems engineering effort for the technology maturation phase on the C2 Sensor Handover project.

4.6 C2 Sensor Case Study Results

Once the Technology Maturation stage was finished, data were collected, and project staff was interviewed. The following results highlight the change in data collected:

1. Function Points at Complete: 225

2. Project Stage: Technology Maturation

3. Type of Project: Systems Engineering

4. Schedule at Complete: 97 Days.


The data at complete demonstrate a slight change from the initial variables. The number of function points, for example, went from 189 at start to 225 at complete. This was likely because the architecture uncovered gaps that were not originally identified in the prior stage. Additionally, there was a slight schedule slip, from the originally planned 92 days to the actual result of 97 days at completion. Following the formula used for this study to measure schedule slip, it was determined that the final schedule performance for this project was (97 / 92) – 1 = 5 percent slippage (rounded to the nearest percent). This data value was then documented as part of the overall study.
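As a quick check of the arithmetic, the slip formula used throughout this study can be expressed directly; this is a trivial sketch, and the 5 percent figure is the rounded result.

public class ScheduleSlip {
    // Schedule slip = (actual duration / planned duration) - 1
    static double slip(double actualDays, double plannedDays) {
        return (actualDays / plannedDays) - 1;
    }

    public static void main(String[] args) {
        System.out.printf("C2 Sensor slip: %.3f%n", slip(97, 92)); // ~0.054, i.e., 5 percent
    }
}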

Although there was a 5 percent schedule slip, this still shows improvement from the original average of 25 percent slippage without the use of the DST. This one case highlights the success of DST implementation, which was further proven during the Mann-Whitney and Kruskal-Wallis analysis of all 17 projects. The most likely cause of the 5 percent schedule slip was a technical conflict involving the method of authentication. Some subject matter experts thought that the authentication would occur at the sensor, and others thought authentication would occur at the control station. The original documentation provided did not state which entity would perform the authentication, so the government had to ask the vendor which method would be preferable. The vendor then took several days to get back to the government with the final results, which delayed the project. It is arguable, however, that had this effort utilized DBSE, the authentication dilemma would not have existed because no one would have noticed the issue. Authentication would have been analyzed in text form instead of architecturally, a format that clearly passed the material solutions review. In this case, the project would not have incurred a schedule slip in the technology maturation phase. However, the impacts to the overall capability would have been higher because this issue would have arisen during code development. Fixing bugs later in the project would have cost the government far more time and effort because an Engineering Change Request (ECR) would have to be filed.

Project staff was interviewed and the lead systems engineer, Fleming Paredes of NAVAIR, stated that “The use of MBSE and the DST on this project was critical for on time completion, and uncovered requirement gaps that otherwise would not have been identified.” The Project Lead mentioned that MBSE and the DST seemed to have helped the overall project, but he had concerns regarding applicability in other areas. Specifically, there were concerns that the DST does not address architectural implementation after the project finishes the technology maturation stage. Additionally, some subject matter experts who were utilized to review the models felt as though the SysML nomenclature was confusing. They recommended that the reviewing staff should be a factor to include in future releases of the DST. Overall, the feedback was positive or constructive. Of the individuals questioned, no one stated that the DST had any negative impact, and the project was considered a success. Another example, Project K, demonstrates similar results with different DST output values.

4.7 Project K Case Study Overview

The second Case Study to be analyzed is Project K. The goal of this project was to develop a new release of an existing software application installed on a Navy aircraft. All requirements and initial code sets had already been developed and submitted to a Naval office at the Pentagon, but the initial TDP package was rejected and the program was directed to re-do everything and create a new release that fixes the identified errors.


The first step that occurred for DST implementation was to determine if this project was applicable to the study. The following 6 characteristics were collected:

1. Program Name: Program A

2. Project Name: Project K

3. Number of Function Points: 376

4. Project Stage: Technology Maturation

5. Type of Project: Systems Engineering

6. Planned Schedule: 112 Days.

The program name and project name were not relevant; however, the program name has been modified to “Program A” and the project name has been modified to “Project K” for obscurity reasons. The number of function points had been predefined in the prior solutions analysis phase of the project. Because the number of function points was between 0 and 1000, this met the function point qualifications for this study. The project was also transitioning into the Technology Maturation phase, so this met the project stage qualifications for this study. The project type qualifications for this study were met because this was a systems engineering project. Finally, the planned schedule was also within scope of this study because 112 days fit the study window of May 1st, 2020 through September 1st, 2020. This project completed the material solutions analysis phase and was ready to start the technology maturation phase on May 25th, 2020. Since this project met all qualifications for this study, approval was received from the project manager to begin DST execution.


The actual DST code was not yet complete on May 25th, so the logic flows were submitted for implementation instead of the Java software program. This, however, was not an issue because the DST software was programmed based on the DST logic flows, so by implementing the logic flows, the project was executing the DST. In order to implement the logic flows, the following 9 input variables were gathered:

1. Number of Function Points: 376

2. Percent of New Innovations: 24

3. Percent of Preexisting Requirements: 100

4. Project Management Style: Agile

5. Percent of New or Untrained Staff: 60

6. Model Layers of Abstraction: 3

7. Preexisting Architecture: None

8. DoDAF or MoDAF Preferred: DoDAF

9. Interoperability Requirements: None

Identification of the 9 variables above was critical for executing the DST process flows. Based on the input variables, the first step for following the process flows was method calculation. The first calculation for method was X = X + (24/4) = 6. This formula was utilized because a preexisting architecture did not exist, and the project manager estimated the percent of new innovations to be 24. The next calculation was X = X + (100-100)/4 = 6 + 0 = 6, which was based on the percent of preexisting requirements. The next calculation was X = X + 25 = 6 + 25 = 31 based on the project management style being agile. The last calculation was X = X + (100-60)/4 = 31 + 10 = 41 based on the percent of new or untrained staff. Since the final X variable was 41, the output method selection for the DST was Structured Analysis and Design.

Because the previously calculated X value was less than 60, the final Y value was hard-set to Y = 1. Since there were no interoperability requirements to override the selection, the final output framework would have been either DoDAF or MoDAF. Because the project selected DoDAF as preferred, the final DST output framework was DoDAF.

Language selection began with the previously calculated X value, which was 41, so the Z value was hard-set to 26 as a starting point. The next calculation considered the percent of untrained staff, which was 60 percent. Thus Z = Z + [49 - (60*0.49)] = 26 + 19.6 = 45.6. With a final Z calculation of 45.6, and no predefined interoperability requirements, the DST selected language output was IDEF.

To calculate the tool output, the DST process started with the previously calculated language (Z) value. Because the Z value was between 26 and 50, the DST process hard-set the W value to 76. With a final W calculation of 76, and no predefined interoperability requirements, the DST selected tool output was IBM.

Based on all calculations, the final DST output variables were as follows: SAD, DoDAF, IDEF, and IBM. This was a drastic change from normal operations because, if the project were to follow the original Navy mandate, the variables used would have been OOSEM, UAF, SysML, NoMagic. By giving the project flexibility to implement the DST, all 4 identified variables had changed. Once the DST output variables were defined, the next step was for the project to implement the DST based on the previously defined output variables.
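As with the C2 Sensor example (see the sketch at the end of Section 4.4), the Project K scores can be replayed in a few lines of Java. Again, the class and variable names are illustrative only, and the hard-set values follow the score ranges in the Appendix B source code.

public class ProjectKDstWorkedExample {
    public static void main(String[] args) {
        int newInnovations = 24;   // percent of new innovations
        int preexistingReqs = 100; // percent of preexisting requirements
        int untrainedStaff = 60;   // percent of new or untrained staff

        // Method score X: 6 + 0 + 25 (agile) + 10 = 41 -> SA&D (41-60)
        double x = newInnovations / 4.0
                 + (100 - preexistingReqs) / 4.0
                 + 25
                 + (100 - untrainedStaff) / 4.0;

        // Framework score Y: X below 61 hard-sets Y to 1 -> DoDAF/MoDAF range (0-33)
        double y = 1;

        // Language score Z: X in 41-60 hard-sets Z to 26, then the staff adjustment
        double z = 26 + (49 - 0.49 * untrainedStaff); // 45.6 -> IDEF (26-50)

        // Tool score W: Z in 26-50 hard-sets W to 76 -> IBM (76-100)
        double w = 76;

        System.out.printf("X=%.1f Y=%.1f Z=%.1f W=%.1f%n", x, y, z, w);
    }
}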

4.8 Project K Case Study Execution

The technology maturation phase of the project was initialized with the creation of a high-level node tree describing the project's functional hierarchy. This node tree was created in accordance with the SAD method and the Operational Activity Decomposition Tree of the DoDAF framework, using the IDEF language in the IBM toolset. The decomposition tree visualizes the top-most layer that outlines the high-level activities for the project. This decomposition satisfied the SAD methodology and the IDEF language. For reference, “process mapping is an indispensable tool in documenting and understanding a system before analyzing and redesigning it, and it requires a graphical representation. IDEF provides a standard format that satisfies this need” (Drake et al., 1998, p. 211). As such, each process is mapped, and the lines shown in Figure 18 that connect the processes represent an aggregation association. Additionally, the processes on the decomposition tree could be double-clicked in the IBM tool, and a lower layer sequence of processes could be displayed.

After all 50 operational context views were mapped out, the next step in the SAD process was to develop resource flows that support the different mission scenarios. Such flows helped to demonstrate the behavior of the system. The behavioral diagrams for Project K were developed in accordance with the DST, exercising the event-trace DoDAF framework in the IDEF language, using the IBM toolset. Under this construct, the diamonds represented decision nodes, and the boxes represented activities. The horizontal lines represented separations between swim-lanes that were assigned to the functions outlined in the operational context.

After the structure and behavior were defined, the functions were mapped to the requirements using an exchange requirements matrix. This matrix correlated the mission task, event trigger, sending node, receiving node, data format, classification, assigned system, and assigned requirement. Completing the requirements matrix was the last step prior to packaging everything into a TDP, which was passed over to the engineering manufacturing and development phase. Handing over the TDP concluded the systems engineering effort for the technology maturation phase on Project K.
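To illustrate the shape of that matrix, a hypothetical data structure is sketched below. The field names mirror the columns listed above, but the class itself, and any example values, are assumptions made for illustration rather than artifacts of the project.

// Hypothetical representation of one row of the exchange requirements matrix.
public class ExchangeRequirement {
    public final String missionTask;          // e.g., the mission-level task being supported
    public final String eventTrigger;         // event that initiates the exchange
    public final String sendingNode;          // node that sends the message
    public final String receivingNode;        // node that receives the message
    public final String dataFormat;           // format of the exchanged data
    public final String classification;       // security classification of the exchange
    public final String assignedSystem;       // system responsible for the exchange
    public final String assignedRequirement;  // requirement identifier traced to this row

    public ExchangeRequirement(String missionTask, String eventTrigger, String sendingNode,
            String receivingNode, String dataFormat, String classification,
            String assignedSystem, String assignedRequirement) {
        this.missionTask = missionTask;
        this.eventTrigger = eventTrigger;
        this.sendingNode = sendingNode;
        this.receivingNode = receivingNode;
        this.dataFormat = dataFormat;
        this.classification = classification;
        this.assignedSystem = assignedSystem;
        this.assignedRequirement = assignedRequirement;
    }
}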

4.9 Project K Case Study Results

Once the Technology Maturation stage was finished, data were collected, and project staff was interviewed. The following results highlight the data collected at completion:

1. Function Points at Complete: 376

2. Project Stage: Technology Maturation

3. Type of Project: Systems Engineering

4. Schedule at Complete: 112 Days

Unlike the C2 Sensor example project, the data at completion for Project K did not demonstrate a change from the initial variables. The number of function points, for example, started at 376 and ended at 376 at completion. This is likely because the project was firm-fixed price (additional requirements were not allowed) and an existing system was already in place. The results indicate that all function points were already mapped out prior to entering the technology maturation phase, which can be expected for these types of projects. In this sense, the architectural benefit was not realized through missing functions, but rather based upon portfolio level acceptance. As mentioned in the overview, a previous TDP was submitted to a Naval office at the Pentagon, but it was rejected and the program was directed to re-do everything. The reason the project was not allowed to transition to the next phase was that all the architectural views were done in Microsoft PowerPoint, and they failed to demonstrate that all requirements had been functionally allocated. Ultimately, the lack of a comprehensive architecture was one of the reasons why the project failed initial TDP submission. As of September 4th, 2020, the second TDP submission attempt was accepted and the project received approval to move to the engineering and manufacturing phase. This proves that the DST's implementation was successful because, without it, there was a high chance that the project would have failed the second portfolio level review.

Regarding project schedule, there was no schedule slip. Following the formula used for this study to measure schedule slip, it was determined that the final schedule performance for this project was (112 / 112) – 1 = 0 percent slippage. This data value was then documented as part of the overall study. Although the project did not complete ahead of schedule, the 0 percent schedule slip still shows improvement from the original average of 25 percent slippage without the use of the DST. It can be concluded that this second case highlights the success of DST implementation, which was further proven during the Mann-Whitney and Kruskal-Wallis analysis of all 17 projects.

Project staff was interviewed and the lead systems engineer, Keith Robinson of NAVAIR, stated that “Without MBSE [and DST], the project would have been dead on arrival. The previous submission attempt was rejected because we didn't have an architecture.” The Project Lead did not comment about MBSE but did mention that the DST helped achieve leadership support. Additionally, the subject matter experts who were utilized to review the models provided mostly positive feedback. The only criticism received could be classified as adaptation resistance because it was directed toward MBSE in general rather than the DST specifically. Unlike the C2 Sensor example, there were very few challenges with nomenclature, which was most likely due to the use of the IDEF language rather than SysML. Overall, the feedback was positive or constructive. Of the individuals questioned, no one stated that the DST had any negative impact, and the project was considered a success.

With the research complete, the original questions entering the study could be answered.

4.10 Conclusion

After the results had been gathered, various research attributes were identified and examined. The first of these involved the main factors that caused excess median schedule delays with MBSE projects when compared to DBSE in the technology maturation phase. With H1 and H2 validated, it is safe to assume that the main factors that caused schedule delays revolved around the improper application of MBSE methods. By allowing projects the ability to tailor the various MBSE pillars (method, framework, language, and tool), the median schedule delay was reduced from 25 percent to 0 percent. As examined in the first case study, it is also important to control architectural reviews. Since a large portion of MBSE involves subject matter expert reviews, it is important to monitor and control model feedback. A responsibility assignment matrix might be a useful tool to aid review completion and should be examined in future studies. As examined in the second case study, the application of the DST caused the project to finish on time with an architectural product set that was sufficient for TDP approval, and the project was able to progress to the engineering and manufacturing phase. Overall, the validation of H1 and H2 proves that the DST was successful; however, there are many areas for improvement, and more research should be performed to allow for broader applicability, which will be discussed in Chapter 5.


Chapter 5—Discussion and Conclusions

5.1 Discussion

Model-Based Systems Engineering (MBSE) is a complex systems engineering approach that utilizes digital models in a data-rich environment, having a unified, coherent model as the primary artifact. In MBSE, classical Document-Based Systems Engineering (DBSE) products are still utilized, but these products are not manually created; rather, they are generated as an output from the underlying model. Although DBSE has its advantages, MBSE appears to be the future direction for systems engineering projects. MBSE allows for a single source of truth that documents the entire system in a human-readable format. Additionally, the model can be interfaced with other external applications, the modeling process identifies analysis and requirement gaps, and the model repository documents change and configuration management. MBSE had enough advantages that the Navy now requires all systems engineering projects to utilize MBSE. In the Army, the Natick Soldier RD&E Center (NSRDEC) is utilizing MBSE in the form of Systemigrams and SysML (Cloutier et al., 2015). In the Air Force, the Space and Missile Systems Center Wideband Global SATCOM program is utilizing MBSE to speed delivery of satellites (Leveraging Model-Based Systems Engineering, 2020). Even the European Space Agency is utilizing MBSE and has found many advantages (Gebreyohannes, Edmonson, & Esterline, 2017). It is clear that many organizations outside the Navy, and perhaps worldwide, are slowly adopting MBSE as a preferred systems engineering approach. In light of this observation, MBSE should be properly defined, compartmentalized, and implemented in a manner that provides enough flexibility and guidance so that adaptation friction is reduced. If MBSE is not properly applied, then projects could incur negative performance, as observed in this research. However, if MBSE is properly applied, then projects could inherit all the advantages of MBSE without a negative performance impact. This study attempted to lay the groundwork for MBSE compartmentalization and tailorable implementation utilizing a small sample of projects within the Navy. Hopefully, future studies can examine the lessons learned and improve upon the body of knowledge, assisting other projects on a broader scale.

5.2 Conclusions

Overall, this study was a success, as the validation of H1 and H2 demonstrated that allowing projects the ability to tailor the various MBSE pillars (method, framework, language, and tool) through the use of a DST can successfully reduce median schedule delay. Through this research, multiple outcomes were observed. Firstly, the factors that caused outlier MBSE schedule performance in the technology maturation phase are most likely due to constructive and destructive output variable alignment. As mentioned previously, if the DST output selection were static, then a certain set of projects would randomly fall into alignment with the unchanging method, framework, language, and tool. However, many projects would also fall out of alignment and experience a significant schedule slip. This is partially confirmed through the final results, which demonstrated equal variance between MBSE and DBSE after DST implementation.

Confirmation of H1 demonstrated that there was a need for a specially designed decision support tool. The study had confirmed that in the Navy, systems engineering projects implementing MBSE with less than 1000 function points fall behind schedule in the technology maturation phase when compared to similar projects using classical DBSE. This confirmation validated the need for this study and laid the groundwork for DST attribute identification and approval from the program office to proceed with this research.

Confirmation of H2 demonstrated that the components of MBSE useful for creating a DST align with the associated pillars. Altering method, framework, language, and tool based on various input factors proved successful in reducing the schedule slip on projects in the technology maturation phase. Another pillar, people, was not heavily considered during DST development and should most likely be explored in future studies.

Ultimately, the advantages of implementing a specifically designed DST are clear. The 17 projects identified experienced a net-positive impact by implementing the DST when compared to the Navy's current method of MBSE implementation. Median schedule slip decreased from 25 percent to 0 percent. However, there are certainly other improvements that need to be explored in future studies, which will be discussed in Section 5.4.

5.3 Contributions to Body of Knowledge

Based on the results of this study and the research outlined in the literature review, three contributions to the body of knowledge can be listed:

1. MBSE can be compartmentalized into 5 basic pillars: Language, Method, Tool, Framework, and People. Nowhere in the existing body of knowledge are all five pillars isolated and clearly defined in terms of what each pillar represents.

2. There is a correct and incorrect technique of MBSE application. Adjusting the MBSE method, framework, language, and tool based on various input conditions can impact schedule performance.

3. The number of system functions can be a useful method of determining size for MBSE projects. Function point analysis is commonly performed in software engineering, but it was difficult to find any reference in relation to MBSE.

5.4 Recommendations for Future Research

Based on the results of this study and the research outlined in the literature review, seven recommendations can be made regarding future research:

1. Manipulate the input and output variables. For example, only the IDEF, UML, SysML, and OPM languages were examined in this study. Other languages such as Architecture Description Language (ADL) were not included as part of this study.

2. Analyze for other performance factors such as cost or quality. This study was limited to schedule performance only; perhaps there was a cost or quality impact that needs to be further explored.

3. Increase the sample size. This study was performed on three sets of 17 projects for statistical significance. A larger data set could be beneficial in further validating applicability as well as provide additional data that could be used to refine the DST.

4. Apply the DST to other phases of the project lifecycle. This study was limited to only the technology maturation phase, so a DST could be developed for other phases such as Engineering and Manufacturing.

5. Apply the DST to larger projects. Projects with function points greater than 1000 were not analyzed in this study. It is not clear if a DST would be useful for larger scale projects.

6. Apply the DST to other organizations. This study was limited to systems engineering projects in the Navy, and it is unclear if similar results would be achieved in other organizations.

7. Include additional human factors. Use of the people pillar was lean in this study because the projects selected were not able to adjust staffing. Human factors such as automaticity, memory, training, fatigue, and other related aspects were not considered and could be included in future studies.


References

About function point analysis. (2020). International Function Point Users Group. Retrieved July 14, 2020, from https://www.ifpug.org/about-function-point-analysis/

Albrecht, A. J. (1979). Measuring application development productivity. In Joint Share, Guide, and IBM Application Development Symposium, 1979.

Cameron, B., & Adsit, D. M. (2018). Model-based systems engineering uptake in engineering practice. IEEE Transactions on Engineering Management, 1-11.

Clark, J. O. (2009). A theory of information quality and its implementation in systems engineering. IEEE. https://ieeexplore-ieee-org.proxygw.wrlc.org/stamp/stamp.jsp?tp=&arnumber=4815831

Cloutier, R., Sauser, B., Bone, M., & Taylor, A. (2015). Transitioning systems thinking to Model-Based Systems Engineering: Systemigrams to SysML models. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 45(4), 662-674. https://doi.org/10.1109/TSMC.2014.2379657

Cohen, S., & Soffer, A. (2007). Scrutinizing UML and OPM modeling capabilities with respect to systems engineering. In Proceedings of the 2007 International Conference on Systems Engineering and Modeling (pp. 93-101). IEEE. https://ieeexplore-ieee-org.proxygw.wrlc.org/document/4243723

Delligatti, L. (2014). SysML distilled: A brief guide to the systems modeling language. Pearson Education.

Dickerson, C. E., & Mavris, D. (2013, December). A brief history of models and model based systems engineering and the case for relational orientation. IEEE Systems Journal, 7(4).

DoD Instruction 5000.02: Operation of the Adaptive Acquisition Framework [DoD Instruction]. (2020, January 23). http://acqnotes.com/wp-content/uploads/2014/09/DoD-Instruction-5000.2-Operation-of-the-Adaptive-Acquisition-Framework-23-Jan-2020.pdf

Dori, D., Perelman, V., Shlezinger, G., & Reinhartz-Berger, I. (2005). Pattern-based design recovery from object-oriented languages to Object Process Methodology. In IEEE International Conference on Software - Science, Technology & Engineering (SwSTE'05). IEEE. https://doi.org/10.1109/SWSTE.2005.16

Drake, P. R., John, E. G., Petheram, C. L., Krabbe, P., & Ooi, S. N. (1998). Slipping systems into SMEs. Manufacturing Engineer, 77(5), 217-220. https://doi.org/10.1049/me:19980506

Eclipse Papyrus RCP (Version 2020-06 release (4.8.0)) [Computer software]. (2020). Eclipse Foundation. https://www.eclipse.org/papyrus/download.html

Enterprise Architect 15. (2020). Sparx Systems. Retrieved June 25, 2020, from https://sparxsystems.com/products/ea/shop/

Fisher, G. H. (1998). Model-based systems engineering of automotive systems. 17th DASC. AIAA/IEEE/SAE Digital Avionics Systems Conference Proceedings (Cat. No.98CH36267), Bellevue, WA, USA, 1, B15/1-B15/7. https://doi.org/10.1109/DASC.1998.741455

Furey, S. (1997). Why we should use function points [software metrics]. IEEE Software, 14(2), 28-29. https://doi.org/10.1109/52.582971

Gao, S., Cao, W., Fan, L., & Liu, J. (2019). MBSE for satellite communication system architecting. IEEE Access, 7, 164051-164067. https://doi.org/10.1109/ACCESS.2019.2952889

Gebreyohannes, S., Edmonson, W., & Esterline, A. (2018). Formal behavioral requirements management. IEEE Systems Journal, 12(3), 3006-3017. https://doi.org/10.1109/JSYST.2017.2775740

Grobshtein, Y., & Dori, D. (2009). Creating SysML views from an OPM model. 2009 International Conference on Model-Based Systems Engineering, Haifa, Israel. https://doi.org/10.1109/MBSE.2009.5031718

Hardwick, M., & Spooner, D. L. (1989). The ROSE data manager: Using object technology to support interactive engineering applications. IEEE Transactions on Knowledge and Data Engineering, 1(2), 285-289. https://doi.org/10.1109/69.87967

Hart, L. E. (2015, July 30). Introduction to Model-Based System Engineering (MBSE) and SysML. Lockheed Martin. https://www.incose.org/docs/default-source/delaware-valley/mbse-overview-incose-30-july-2015.pdf

Hause, M. (2010). The Unified Profile for DoDAF/MODAF (UPDM) enabling systems of systems on many levels. 2010 IEEE International Systems Conference, San Diego, CA, USA, 426-431. https://doi.org/10.1109/SYSTEMS.2010.5482450

Hause, M., Thom, F., & Moore, A. (2005). Inside SysML. Computing & Control Engineering Journal, 16(4), 10-15. https://doi.org/10.1049/cce:20050402

Holt, J. H., Perry, S., Payne, R., Bryans, J., Hallerstede, S., & Hansen, F. O. (2014). A model-based approach for systems of systems. IEEE Systems Journal, 9(1), 252-262. https://doi.org/10.1109/JSYST.2014.2312051

Huang, H., Peng, R., & Feng, Z. (2018). Efficient and exact query of large process model repositories in cloud workflow systems. IEEE Transactions on Services Computing, 11(5), 821-832. https://doi.org/10.1109/TSC.2015.2481409

ITZ-LLC products. (2020). ITZ-LLC. Retrieved June 24, 2020, from https://itz.llc/

Kalawsky, R. S., O'Brien, J., Chong, S., Wong, C., Jia, H., Pan, H., & Moore, P. R. (2013). Bridging the gaps in a Model-Based System Engineering workflow by encompassing hardware-in-the-loop simulation. IEEE Systems Journal, 7(4), 593-605. https://doi.org/10.1109/JSYST.2012.2230995

Krikorian, H. F. (2003). Introductions to object-oriented systems engineering. IT Pro, (March/April), 38-42.

Lavazza, L., & Garavaglia, C. (2009). Using function points to measure and estimate real-time and embedded software: Experiences and guidelines. 3rd International Symposium on Empirical Software Engineering and Measurement, Lake Buena Vista, FL, 100-110. https://doi.org/10.1109/ESEM.2009.5316018

Leveraging Model-Based Systems Engineering to speed delivery to the warfighter. (2020, March 12). Los Angeles Air Force Base. Retrieved September 21, 2020, from https://www.losangeles.af.mil/News/Article-Display/Article/2110385/wgs-11-leveraging-model-based-systems-engineering-to-speed-delivery-to-the-warf/

Li, L., Soskin, N. L., Jbara, A., Karpel, M., & Dori, D. (2019). Model-based systems engineering for aircraft design with dynamic landing constraints using object-process methodology. IEEE Access. Retrieved September 10, 2019, from https://doi.org/10.1109/ACCESS.2019.2915917

Lightsey, B. (2001, January 1). Systems engineering fundamentals. Defense Acquisition University. https://apps.dtic.mil/dtic/tr/fulltext/u2/a387507.pdf

Liu, B., & Wu, X. (2011). Mission reliability analysis of missile defense system based on DODAF and Bayesian networks. 2011 International Conference on Quality, Reliability, Risk, Maintenance, and Safety Engineering, Xi'an, China, 777-780. https://doi.org/10.1109/ICQR2MSE.2011.5976725

Logical data model - IDEF1X. (2020). Sparx Systems. Retrieved June 25, 2020, from https://sparxsystems.com/resources/gallery/diagrams/software/sw-logical_data_model-idef1x.html

Lu, J., Wang, G., & Törngren, M. (2020). Design ontology in a case study for cosimulation in a Model-Based Systems Engineering tool-chain. IEEE Systems Journal, 14(1), 1297-1308. https://doi.org/10.1109/JSYST.2019.2911418

Mazeika, D., Morkevicius, A., & Aleksandraviciene, A. (2016). MBSE driven approach for defining problem domain. 2016 11th System of Systems Engineering Conference (SoSE), Kongsberg, 1-6. https://doi.org/10.1109/SYSOSE.2016.7542911

Nuss, A. J., Blackburn, T. D., & Garstenauer, A. (2018). Toward resilience as a tradable parameter during conceptual trade studies. IEEE Systems Journal, 12(4), 3393-3403. https://doi.org/10.1109/JSYST.2017.2703608

Ramos, A. L., Ferreira, J. V., & Barcelo, J. (2012). Model-based systems engineering: An emerging approach for modern systems. IEEE Transactions on Systems, Man, and Cybernetics, 42(1), 101-111.

Rickman, D. M. (2001). A process for combining object oriented and structured analysis and design. 20th DASC. 20th Digital Avionics Systems Conference (Cat. No.01CH37219), Daytona Beach, FL, USA, 1, 4E4/1-4E4/6. https://doi.org/10.1109/DASC.2001.963382

Sabir, U., Azam, F., Haq, S. U., Anwar, M. W., Butt, W. H., & Amjad, A. (2019). A model driven reverse engineering framework for generating high level UML models from Java source code. IEEE Access, 7, 158931-158950.

Savary-Leblanc, M. (2019). Improving MBSE tools UX with AI-empowered software assistants. 2019 ACM/IEEE 22nd International Conference on Model Driven Engineering Languages and Systems Companion (MODELS-C), Munich, Germany, 648-652. https://doi.org/10.1109/MODELS-C.2019.00099

Selic, B. (2003). UML 2: A model-driven development tool. IBM Systems Journal, 45(3), 607-620. https://doi.org/10.1147/sj.453.0607

Sitton, M., & Reich, Y. (2019). ESE framework verification by MBSE. IEEE Systems Journal, 13(3), 2108-2117. https://doi.org/10.1109/JSYST.2018.2877667

Sturm, A., Dori, D., & Shehory, O. (2010). An Object-Process-based modeling language for multiagent systems. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 40(2), 227-241. https://doi.org/10.1109/TSMCC.2009.2037133

Torre, D., Labiche, Y., Genero, M., Baldassarre, M. T., & Elaasar, M. (2018). UML diagram synthesis techniques: A systematic mapping study. 2018 IEEE/ACM 10th International Workshop on Modelling in Software Engineering (MiSE), Gothenburg, Sweden, 33-40. https://ieeexplore-ieee-org.proxygw.wrlc.org/document/8445456

Vanderperren, Y., & Dehaene, W. (2005). UML 2 and SysML: An approach to deal with complexity in SoC/NoC design. Design, Automation and Test in Europe, Munich, Germany, 2, 716-717. https://doi.org/10.1109/DATE.2005.319

Weiss, E., Chung, L., & Nguyen, L. (2019). A MBSE approach to satellite clock time and frequency adjustment in highly elliptical orbit. MILCOM 2019 - 2019 IEEE Military Communications Conference (MILCOM), Norfolk, VA, USA, 65-69. https://doi.org/10.1109/MILCOM47813.2019.9020875

What is a function point? (2020). IFPUG International Function Point Users Group. Retrieved July 19, 2020, from https://www.ifpug.org/faqs-2/#Eleven

Whittle, J., Hutchinson, J., Rouncefield, M., Burden, H., & Heldal, R. (2013). Industrial adoption of Model-Driven Engineering: Are the tools really the problem? In A. Moreira, B. Schätz, J. Gray, A. Vallecillo, & P. Clarke (Eds.), Model-Driven Engineering Languages and Systems. MODELS 2013. Lecture Notes in Computer Science, 8107, 1-17. https://doi.org/10.1007/978-3-642-41533-3_1

Zakarian, A., & Kusiak, A. (2000). Analysis of process models. IEEE Transactions on Electronics Packaging Manufacturing, 23(2), 137-147.

Zhang, X., Luo, A., Mao, Y., Lin, M., Kou, Y., & Liu, J. (2020). A semi-automatic optimization design method for SvcV-5 in DoDAF 2.0 based on service identification. IEEE Access, 8, 33442-33460. https://doi.org/10.1109/ACCESS.2020.2970446

Zhou, K., & Rong, G. (2010). Study of supply chain monitoring system based on IDEF method. 2010 International Conference on Logistics Systems and Intelligent Management (ICLSIM), Harbin, China, 278-281. https://doi.org/10.1109/ICLSIM.2010.5461420

Zhou, K., & Rong, G. (2011). Application of IDEF method in hardware-in-the-loop process simulation. MSIE 2011, Harbin, China, 1198-1200. https://doi.org/10.1109/MSIE.2011.5707635

Appendix A

Appendix A is a permission letter authorizing the use of NON-FOUO data in the praxis. This memo also authorizes the DST to be applied to a list of projects. This is important because this letter was used to convince project leads to implement the DST while demonstrating that there was leadership backing this study.

FOUO Permission Letter

Note. PEO(U&W) Office, Patrick Buckley & Paul Weinstein, 2019.


Appendix B

Appendix B is the raw source code that was used to develop the DST. The source code was developed in Java and requires a Java Runtime Environment (JRE) of at least 1.8.0_241. Projects can utilize the code by copying and pasting it into a compiler (such as Eclipse). This software is not necessary to implement the DST since it is based on the DST process flows. Rather, the software is an implementation aid for individuals who find the UML process flows challenging to understand or for projects that desire rapid DST result generation.

DST Source Code:

import javax.swing.JOptionPane;

public class DST {

    public static String x = "null";  // recommended method
    public static String y = "null";  // recommended framework
    public static String z = "null";  // recommended language
    public static String w = "null";  // recommended tool
    public static double xValue = 0;  // value of X (method score)
    public static double yValue = 0;  // value of Y (framework score)
    public static double zValue = 0;  // value of Z (language score)
    public static double wValue = 0;  // value of W (tool score)
    public static int a = 0;          // percent of new innovations
    public static int b = 0;          // percent of preexisting requirements
    public static int c = 0;          // percent of new/untrained staff
    public static int d = 0;          // estimated layers of abstraction
    public static String e = "null";  // choosing DoDAF or MoDAF

    // This is the code for the method selection (X).
    public static String method(double functPoints, String preExistArch,
            String projectManagementStyle, String interoperabilityReq) {
        if (functPoints <= 1000) {
            if (preExistArch.equals("TRUE")) {
                b = Integer.parseInt(JOptionPane.showInputDialog("Please input Percent of Preexisting Requirements"));
                xValue = (100 - b) / 4;
                if (projectManagementStyle.equals("AGILE")) {
                    xValue = xValue + 25;
                    c = Integer.parseInt(JOptionPane.showInputDialog("Please input Percent of New / Untrained staff"));
                    xValue = xValue + ((100 - c) / 4);
                } // end agile if
                if (projectManagementStyle.equals("WATERFALL")) {
                    c = Integer.parseInt(JOptionPane.showInputDialog("Please input Percent of New / Untrained staff"));
                    xValue = xValue + ((100 - c) / 4);
                } // end waterfall if
                if (projectManagementStyle.equals("HYBRID")) {
                    xValue = xValue + 12.5;
                    c = Integer.parseInt(JOptionPane.showInputDialog("Please input Percent of New / Untrained staff"));
                    xValue = xValue + ((100 - c) / 4);
                } // end hybrid if
            } // end true preExistArch if
            if (preExistArch.equals("FALSE")) {
                a = Integer.parseInt(JOptionPane.showInputDialog("Please input percent of new innovations"));
                xValue = xValue + (a / 4);
                if (projectManagementStyle.equals("AGILE")) {
                    xValue = xValue + 25;
                    c = Integer.parseInt(JOptionPane.showInputDialog("Please input Percent of New / Untrained staff"));
                    xValue = xValue + ((100 - c) / 4);
                } // end agile if
                if (projectManagementStyle.equals("WATERFALL")) {
                    c = Integer.parseInt(JOptionPane.showInputDialog("Please input Percent of New / Untrained staff"));
                    xValue = xValue + ((100 - c) / 4);
                } // end waterfall if
                if (projectManagementStyle.equals("HYBRID")) {
                    xValue = xValue + 12.5;
                    c = Integer.parseInt(JOptionPane.showInputDialog("Please input Percent of New / Untrained staff"));
                    xValue = xValue + ((100 - c) / 4);
                } // end hybrid if
            } // end false preExistArch if
            // Interoperability requirements override the calculated score.
            switch (interoperabilityReq) {
                case "ROSE":   xValue = 1;  break;
                case "OPM":    xValue = 21; break;
                case "SA&D":   xValue = 41; break;
                case "Vitech": xValue = 61; break;
                case "OOSEM":  xValue = 81; break;
                default:
            } // end switch statement
            // Map the X score to a recommended method.
            if (xValue >= 81)                  { x = "OOSEM"; }
            if (xValue >= 61 && xValue <= 80)  { x = "Vitech"; }
            if (xValue >= 41 && xValue <= 60)  { x = "SA&D"; }
            if (xValue >= 21 && xValue <= 40)  { x = "OPM"; }
            if (xValue <= 20)                  { x = "ROSE"; }
        } else {
            JOptionPane.showMessageDialog(null, "You have entered an incorrect number of function points");
        }
        System.out.println("recommended x: " + x + " value: " + xValue); // testing purposes
        return x;
    } // end method method

    // This is the code for the framework selection (Y).
    public static String framework(double functPoints, String interoperabilityReq) {
        if (functPoints <= 1000) {
            if (xValue >= 61) {
                d = Integer.parseInt(JOptionPane.showInputDialog("Please input estimated layers of abstraction"));
                if (d >= 4 && d <= 5) { yValue = yValue + 25; }
                if (d >= 6)           { yValue = yValue + 50; }
                a = Integer.parseInt(JOptionPane.showInputDialog("Please input percent of new innovations"));
                yValue = yValue + (a / 2);
            } // end xValue >= 61 if
            if (xValue >= 0 && xValue <= 60) { yValue = 1; }
            switch (interoperabilityReq) {
                case "DODAF/MODAF": yValue = 1;  break;
                case "UPDM":        yValue = 34; break;
                case "UAF":         yValue = 99; break;
                default:
            } // end switch statement
            if (yValue >= 67 && yValue <= 100) { y = "UAF"; }
            if (yValue >= 34 && yValue <= 66)  { y = "UPDM"; }
            if (yValue >= 0 && yValue <= 33) {
                e = JOptionPane.showInputDialog("Please choose DoDAF or MoDAF");
                if (e.equals("DoDAF")) { y = "DoDAF"; }
                if (e.equals("MoDAF")) { y = "MoDAF"; }
            }
        } else {
            JOptionPane.showMessageDialog(null, "You have entered an incorrect number of function points");
        }
        System.out.println("recommended y: " + y + " value: " + yValue); // testing purposes
        return y;
    } // end framework method

    // This is the code for the language selection (Z).
    public static String language(double functPoints, String interoperabilityReq) {
        if (functPoints <= 1000) {
            if (xValue >= 41) {
                // (commented out in the original source)
                // d = Integer.parseInt(JOptionPane.showInputDialog("Please input estimated layers of abstraction"));
                // if (d >= 4 && d <= 5) { zValue = zValue + 25; }
                // if (d >= 6)           { zValue = zValue + 50; }
                if (xValue >= 41 && xValue <= 60)  { zValue = 26; }
                if (xValue >= 61 && xValue <= 80)  { zValue = 51; }
                if (xValue >= 81 && xValue <= 100) { zValue = 51; }
                // zValue = zValue + (c / 2);
                // switch (interoperabilityReq) { case "UML": zValue = 51; break; case "SysML": zValue = 76; break; default: }
            } // end xValue >= 41 if
            if (xValue >= 21 && xValue <= 40) { zValue = 1; }
            if (xValue >= 0 && xValue <= 20)  { zValue = 26; }
            // The following adjustment was garbled in the extracted source; it is
            // reconstructed from the DST process flows in Chapter 4 (Z = Z + [49 - 0.49*c]).
            c = Integer.parseInt(JOptionPane.showInputDialog("Please input Percent of New / Untrained staff"));
            zValue = zValue + (49 - (c * 0.49));
            if (zValue >= 0 && zValue <= 25)   { z = "OPM"; }
            if (zValue >= 26 && zValue <= 50)  { z = "IDEF"; }
            if (zValue >= 51 && zValue <= 75)  { z = "UML"; }
            if (zValue >= 76 && zValue <= 100) { z = "SysML"; }
        } else {
            JOptionPane.showMessageDialog(null, "You have entered an incorrect number of function points");
        } // end else statement
        System.out.println("recommended z: " + z + " value: " + zValue); // testing purposes
        return z;
    } // end language method

    // This is the code for the tool selection (W).
    public static String tool(double functPoints, String interoperabilityReq) {
        if (functPoints <= 1000) {
            if (zValue >= 51) {
                d = Integer.parseInt(JOptionPane.showInputDialog("Please input estimated layers of abstraction"));
                if (d >= 4 && d <= 5) { wValue = wValue + 25; }
                if (d >= 6)           { wValue = wValue + 50; }
                a = Integer.parseInt(JOptionPane.showInputDialog("Please input percent of new innovations"));
                wValue = wValue + (a / 2);
                c = Integer.parseInt(JOptionPane.showInputDialog("Please input Percent of New / Untrained staff"));
                wValue = wValue - c;
                if (wValue < 0) { wValue = 0; }
            } // end zValue >= 51 if
            if (zValue >= 26 && zValue <= 50) { wValue = 76; }
            if (zValue >= 0 && zValue <= 25)  { wValue = 1; }
            switch (interoperabilityReq) {
                case "Papyrus":  wValue = 1;  break;
                case "Sparx EA": wValue = 26; break;
                case "NoMagic":  wValue = 51; break;
                case "IBM":      wValue = 76; break;
                default:
            } // end switch statement
            if (wValue >= 0 && wValue <= 25)   { w = "Papyrus"; }
            if (wValue >= 26 && wValue <= 50)  { w = "Sparx"; }
            if (wValue >= 51 && wValue <= 75)  { w = "NoMagic"; }
            if (wValue >= 76 && wValue <= 100) { w = "IBM"; }
        } else {
            JOptionPane.showMessageDialog(null, "You have entered an incorrect number of function points");
        }
        System.out.println("recommended w: " + w + " value: " + wValue); // testing purposes
        return w;
    } // end tool method
} // end DST class
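A minimal driver is sketched below to show how the four DST methods might be invoked in sequence. It is not part of the original source: the class name, the "NONE" placeholder for "no interoperability requirement" (any value not listed in the switch statements falls through to the calculated score), and the C2 Sensor-style inputs are illustrative assumptions. The remaining inputs are gathered through the dialog prompts built into the DST class.

// Hypothetical driver, not part of the delivered DST source.
public class DSTDriver {
    public static void main(String[] args) {
        double functionPoints = 189; // e.g., the C2 Sensor Handover Capability
        String method    = DST.method(functionPoints, "FALSE", "AGILE", "NONE");
        String framework = DST.framework(functionPoints, "NONE");
        String language  = DST.language(functionPoints, "NONE");
        String tool      = DST.tool(functionPoints, "IBM"); // IBM interoperability requirement
        System.out.println("DST recommendation: " + method + ", " + framework + ", " + language + ", " + tool);
    }
}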


Appendix C

Appendix C shows an example screenshot of the code in Appendix B as it appears when compiled and executed. Dropdown menus are utilized for various input variables such as project management style or interoperability requirements. The remaining input variables are gathered through “pop-up” message boxes, such as the percent of preexisting requirements shown in the example.

DST Screenshot

Note. From Systems Engineering Database, by Z Jenkins, ITZ-LLC, 2020.


Appendix D

Appendix D is the project data used to validate H1 and H2. The data has been sanitized and parsed into 3 main sections: DBSE without DST, MBSE without DST, and MBSE with DST. Schedule deviations were normalized using the formula ([Ta / Tp] – 1), where Ta is the actual duration and Tp is the planned duration, and rounded to the nearest percent.

Project Data

DBSE Without DST (all records: project size 0-1000 function points, Technology Maturation stage, no MBSE used, SE method DBSE, no DST used)

Record #   Program     Project Name        Schedule Deviation
1          Program A   Small Project 1       0%
2          Program A   Small Project 2       0%
3          Program A   Small Project 3       0%
19         Program B   Small Project 1      50%
20         Program B   Small Project 2       0%
21         Program B   Small Project 3       0%
22         Program B   Small Project 4      13%
23         Program B   Small Project 5       0%
24         Program B   Small Project 6       0%
25         Program B   Small Project 7       0%
45         Program C   Small Project 3       0%
46         Program C   Small Project 4     -15%
47         Program C   Small Project 5      62%
48         Program C   Small Project 6       0%
49         Program C   Small Project 7       0%
50         Program C   Small Project 8       5%
51         Program C   Small Project 9       0%

MBSE Without DST (all records: project size 0-1000 function points, Technology Maturation stage, MBSE used, SE method OOSEM, no DST used)

Record #   Program     Project Name        Schedule Deviation
4          Program A   Small Project 4       0%
5          Program A   Small Project 5     215%
6          Program A   Small Project 6      38%
7          Program A   Small Project 7       0%
8          Program A   Small Project 8       0%
9          Program A   Small Project 9       0%
10         Program A   Small Project 10    -21%
11         Program A   Small Project 11     12%
12         Program A   Small Project 12     47%
13         Program A   Small Project 13     61%
14         Program A   Small Project 14     90%
15         Program A   Small Project 15     25%
16         Program A   Small Project 16     60%
17         Program A   Small Project 17     90%
18         Program A   Small Project 18      0%
43         Program C   Small Project 1      43%
44         Program C   Small Project 2       0%

MBSE With DST (all records: project size 0-1000 function points, Technology Maturation stage, MBSE used, SE method various, DST used)

Record #   Program     Project Name                 Schedule Deviation
26         Program A   Motion Imagery Capability      8%
27         Program A   Still Imagery Capability       0%
28         Program A   Project G                     33%
29         Program A   Standard CDL Profile         -23%
30         Program A   Bandwidth Efficient CDL      -12%
31         Program A   Project H                      0%
32         Program A   Sensor C2 Handover             5%
33         Program A   ICWG and CM                  -33%
34         Program A   Project I                     15%
35         Program A   Tracks Schema                 -6%
36         Program A   Project K                      0%
37         Program A   Project L                     25%
38         Program A   Standard C2 Profile            0%
39         Program A   Advanced C2 Profile            0%
40         Program A   Advanced C2 Schema             0%
41         Program A   Past Project 1                12%
42         Program A   Past Project 2                25%

Note. From Systems Engineering Database, by Z Jenkins, ITZ-LLC, 2020.
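As a cross-check of the group medians reported in Section 4.3, the deviations above can be fed into a short routine. This is a sketch only; the class name and median helper are assumptions, but the values are transcribed from the tables above, and the printed medians match the reported 0.00, 0.25, and 0.00.

import java.util.Arrays;

public class MedianCheck {
    // Median of a copy of the array (simple sort-based definition).
    static double median(double[] values) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        int n = sorted.length;
        return (n % 2 == 1) ? sorted[n / 2] : (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0;
    }

    public static void main(String[] args) {
        // Schedule deviations transcribed from the Appendix D tables, in record order.
        double[] dbseWithoutDst = {0, 0, 0, 0.50, 0, 0, 0.13, 0, 0, 0, 0, -0.15, 0.62, 0, 0, 0.05, 0};
        double[] mbseWithoutDst = {0, 2.15, 0.38, 0, 0, 0, -0.21, 0.12, 0.47, 0.61, 0.90, 0.25, 0.60, 0.90, 0, 0.43, 0};
        double[] mbseWithDst    = {0.08, 0, 0.33, -0.23, -0.12, 0, 0.05, -0.33, 0.15, -0.06, 0, 0.25, 0, 0, 0, 0.12, 0.25};

        System.out.println("DBSE without DST median: " + median(dbseWithoutDst)); // 0.0
        System.out.println("MBSE without DST median: " + median(mbseWithoutDst)); // 0.25
        System.out.println("MBSE with DST median:    " + median(mbseWithDst));    // 0.0
    }
}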