Applying the Cynefin Sense-Awareness Framework to Develop a Systems Engineering Method Diagnostic Assessment Model (SEMDAM)

by Russell L. Gilbertson

Bachelor of Electrical Engineering, June 1983, University of Minnesota
Master of Business Administration, December 1985, Rensselaer Polytechnic Institute

A Dissertation submitted to

The Faculty of The School of Engineering and Applied Science of the George Washington University in partial fulfillment of the requirements for the degree of Doctor of Philosophy

January 19, 2018

Dissertation directed by

Bereket Tanju Professorial Lecturer in Engineering Management and Systems Engineering

and

Timothy Eveleigh Professorial Lecturer in Engineering Management and Systems Engineering

The School of Engineering and Applied Science of The George Washington University certifies that Russell L. Gilbertson has passed the Final Examination for the degree of Doctor of Philosophy as of October 24, 2017. This is the final and approved form of the dissertation.

Applying the Cynefin Sense-Awareness Framework to Develop a Systems Engineering Method Diagnostic Assessment Model (SEMDAM)

Dissertation Research Committee

Bereket Tanju, Professorial Lecturer in Engineering Management and Systems Engineering, Dissertation Co-Director

Timothy Eveleigh, Professorial Lecturer in Engineering Management and Systems Engineering, Dissertation Co-Director

Shahram Sarkani, Professor of Engineering Management and Systems Engineering, Committee Member

Thomas Mazzuchi, Professor of Engineering Management and Systems Engineering & of Decision Sciences, Committee Member

Amir Etemadi, Assistant Professor of Engineering and Applied Science, Committee Member


Dedication

I wish to dedicate this work to my wife, Ms. Debra Daniels, who supported me throughout this endeavor, and to my mother, Ms. Sandra J. Gilbertson, who passed away before I completed this work but inspired me to always keep learning.

In addition to proving to myself that I could do this, I wanted to demonstrate to my children and their spouses, Jason & Irina and Daniel & Tabitha; my step-children and their spouses, Eric, Melynda & Shawn; and my grandchildren, Elizabeth, Declan, and Finley, that it is never too late to learn.


Acknowledgements

I would like to acknowledge the assistance, advice and guidance provided by Drs. Bereket Tanju, Timothy Eveleigh, Thomas A. Mazzuchi, Shahram Sarkani, Steven Stuban, and Jason Dever from the Department of Engineering Management & Systems Engineering (EMSE), School of Engineering and Applied Science (SEAS) of The George Washington University.

I would like to thank Dr. Jimmie McEver, Johns Hopkins University Applied Physics Laboratory, and Dr. John MacCarthy, Director, Systems Engineering Education Program, Institute for Systems Research, University of Maryland-College Park, for taking time to meet and discuss my research. Much of the research methodology and approach was developed while teaching at the A. James Clark School of Engineering, University of Maryland, for Dr. MacCarthy, an experience for which I am truly grateful.

I would also like to thank Dr. Sarah Sheard, Software Engineering Institute, Carnegie Mellon University, and Dr. Brian E. White, MITRE (Retired), for sharing their work electronically via ResearchGate and providing comments along the way.

This dissertation would have been much different without my George Washington University systems engineering cohort classmates Dr. Alan Ravitz and Dr. Blake Roberts. Alan piqued my interest in medical systems and introduced several technologies that helped significantly. Blake provided a sounding board and support throughout the entire process. Thank you, gentlemen.

Finally, I would not have thought that obtaining a PhD degree while working was possible if not for Dr. Jason Siebel, who shared his positive experiences with the George Washington University's SEAS/EMSE program during a time we worked together.


Abstract

Applying the Cynefin Sense-Awareness Framework to Develop a Systems Engineering Method Diagnostic Assessment Model (SEMDAM)

Different classes of problems warrant different classes of solutions. There is no agreed set of unified principles and models to support systems engineering use over a wide range of domains, nor is there a set of consistent terminology and definitions. These two deficiencies impede the adoption of systems engineering and create problems. On-schedule delivery of a system meeting stakeholder needs at an acceptable cost is dependent upon selection and application of a system engineering method (SEM) appropriate for the class of system problem (COSP). Real-world problems possess a degree of complexity that requires a commensurately complex approach: stakeholders are demanding increasingly capable systems that are growing in complexity, yet complexity-related system misunderstanding is at the root of significant cost overruns and system failures. INCOSE and IEEE recommend system complexity as a basis for selection and tailoring of SE processes; however, neither society provides a definition of complexity or a methodology for SEM selection. Selection of a complexity appropriate SEM is dependent on understanding the COSP, which is currently difficult to define, observe, or measure. This research develops a diagnostic assessment model (DAM), based on the Cynefin framework, that infers COSP and then recommends a complexity appropriate SEM to reduce system miscategorization and therefore reduce the risk of system failure. An empirical healthcare case study is used to demonstrate SEMDAM's application and efficacy.


Table of Contents

Dedication ...... iii

Acknowledgements ...... iv

Abstract ...... v

Table of Contents ...... vi

List of Figures ...... xii

List of Tables ...... xiv

List of Acronyms ...... xv

1 Introduction ...... 1

1.1 GENERAL DESCRIPTION OF THE PROBLEM ...... 4

1.2 MAJOR RESEARCH QUESTIONS ...... 7

1.3 SIGNIFICANCE & JUSTIFICATION ...... 8

1.4 SCOPE AND LIMITATIONS ...... 9

1.5 OVERVIEW OF DISSERTATION ...... 10

2 Literature Review ...... 12

2.1 SE THEORETICAL FOUNDATIONS ...... 15

2.1.1 Theories from Philosophy ...... 16

2.1.2 Theories from Classical Sciences ...... 18

2.1.3 Theories from Systems Science ...... 20

2.1.4 Summary of SE Theoretical Foundations ...... 31

2.2 DEFINITION OF SYSTEM USED ...... 32

2.2.1 System Life Cycle Model ...... 33

2.2.2 System Function ...... 34

2.2.3 System Structure...... 35


2.2.4 System Behavior ...... 36

2.3 ENGINEERING ORDERED SYSTEMS (EOS) ...... 37

2.3.1 EOS Standards of Practice ...... 40

2.3.2 Classical Sciences Assumptions Underpinning EOS ...... 42

2.3.3 Codifying TSM ...... 48

2.3.4 Codifying SoSM ...... 50

2.4 ENGINEERING UN-ORDERED SYSTEMS (EUOS) ...... 54

2.4.1 EUOS Standards of Practice ...... 55

2.4.2 Codifying ESM ...... 56

2.4.3 Codifying CSM ...... 59

2.5 CYNEFIN SENSE-MAKING FRAMEWORK ...... 60

2.5.1 Introduction to Sense-Making ...... 60

2.5.2 History of the Cynefin Framework ...... 61

2.5.3 Cynefin Complexity Domains ...... 65

2.5.4 Cynefin Summary ...... 72

2.6 DEFINING SYSTEM & MODEL COMPLEXITY ...... 73

2.6.1 Definition for System Complexity ...... 74

2.6.2 Complexity of Technology Elements ...... 75

2.6.3 Complexity of Process Elements ...... 76

2.6.4 Complexity of Human Elements ...... 77

2.6.5 Complexity of Environments ...... 78

2.6.6 Combining People, Process, Technology and Environment Complexity ...... 79

2.7 MODELLING COMPLEXITY ...... 82


2.7.1 Definition of Model Complexity ...... 83

2.7.2 Identification of SEMs...... 84

2.7.3 Alignment of COSP and SEMs ...... 86

2.8 LITERATURE REVIEW SUMMARY ...... 89

3 SEMDAM Methodology ...... 90

3.1 RESEARCH DESIGN ...... 91

3.1.1 Research Context ...... 92

3.1.2 Methodological Formulation ...... 94

3.1.3 Alternatives Considered ...... 98

3.1.4 Plans for Validation ...... 108

3.2 SEMDAM INTRODUCTION & SYSTEMATIC DESCRIPTION ...... 112

3.2.1 Step 1 – Gather Evidence of SE&M Activity ...... 115

3.2.2 Step 2 – Evaluate Hypothesis ABD1 & Interpret if COSP is Disorder ...... 116

3.2.3 Step 3 – IV&V Program and SE Management...... 117

3.2.4 Step 4 – IV&V Business or Mission Analysis...... 118

3.2.5 Step 5 – Obtain Evidence for Cause and Effect Analysis ...... 119

3.2.6 Step 6 – Evaluate Hypothesis ABD2 & Interpret if COSP is Known ...... 121

3.2.7 Step 7 – Evaluate Hypothesis ABD3 & Interpret if COSP is Knowable ...... 122

3.2.8 Step 8 – Evaluate Hypothesis ABD4 & Interpret if COSP is Complex ...... 123

3.2.9 Step 9 – Evaluate Hypothesis ABD5 & Interpret if COSP is Chaos ...... 124

3.2.10 Step 10 – Recommend SEM based on Inferred COSP ...... 125

3.3 ATTRIBUTE SELECTION METHOD ...... 126

3.3.1 Valuable, Nontrivial, and Measurable Data...... 127


3.3.2 Attribute Data Types ...... 127

3.3.3 Identification of Key Decision Attributes by Design of Experiment ...... 128

3.3.4 Identification of Key Decision Attributes by Observational Study ...... 130

3.3.5 Describing Trends in Attribute Data ...... 132

3.4 STATISTICAL MODEL SELECTION ...... 132

3.4.1 Linear or Logistic Regression ...... 133

3.4.2 Analysis of Variance (ANOVA) ...... 134

3.4.3 Two-Sample t-Test ...... 135

3.4.4 Data Qualification Methods ...... 135

3.5 CHAPTER SUMMARY ...... 138

4 SEMDAM Applied to an Empirical Case Study ...... 139

4.1 USNHC – EMPIRICAL CASE STUDY OVERVIEW ...... 140

4.1.1 Evidence of SE&M or other Assessments ...... 147

4.1.2 Stakeholder Needs ...... 148

4.1.3 Statement of the Problem or Opportunity ...... 149

4.1.4 Sources of Data (Attributes) ...... 150

4.2 APPLYING SEMDAM TO USNHC CASE STUDY ...... 156

4.2.1 Task 1 (Step 1) – Gather Evidence of SE&M Activity ...... 156

4.2.2 Task 2 (Step 2) – Evaluate Hypothesis ABD1 – COSP is Disorder ...... 156

4.2.3 Task 3 (Step 3) – IV&V Program & Systems Engineering Management ....157

4.2.4 Task 4 (Step 4) – IV&V Business or Mission Analysis ...... 157

4.2.5 Task 5 (Step 5) – Obtain Evidence for Cause and Effect Analysis ...... 158

4.2.6 Task 6 (Step 6) – Evaluate Hypothesis ABD2 – COSP is Known ...... 160


4.2.7 Task 7 (Step 5) – Obtain Evidence for Cause and Effect Analysis ...... 161

4.2.8 Task 8 (Step 7) – Evaluate Hypothesis ABD3 – COSP is Knowable ...... 165

4.2.9 Task 9 (Step 5) – Obtain Evidence for Cause and Effect Analysis ...... 166

4.2.10 Task 10 (Step 8) – Evaluate Hypothesis ABD4 – COSP is Complex ...... 172

4.2.11 Task 11 (Step 10) – Recommend SEM based on Inferred COSP ...... 172

4.3 SUMMATION OF USNHC CASE STUDY ...... 173

4.3.1 ‘Striking a Balance’ between ‘Uses and Disclosures’ ...... 173

4.3.2 Improving the Operations of the Health Care Systems ...... 174

4.3.3 Reducing Administrative Costs ...... 175

4.3.4 USNHC Empirical Case Study Summary & Conclusions ...... 176

5 Synthesis & Discussion ...... 181

5.1 CHAPTER INTRODUCTION ...... 181

5.2 OPPORTUNITIES AND CHALLENGES OF SEMDAM ...... 181

5.2.1 Generalized Typology of SEMs ...... 182

5.2.2 Development of a COSP Sensing Framework ...... 183

5.2.3 Development of SEMDAM ...... 184

5.2.4 Demonstrate SEMDAM via Empirical Case Study ...... 186

5.3 CHAPTER SUMMARY ...... 187

6 Conclusions ...... 188

6.1 SUMMARY & FINDINGS ...... 188

6.1.1 Summary of SE Theoretical Foundations ...... 189

6.1.2 Ordered vs. Un-Ordered Systems ...... 190

6.1.3 Models & Patterns – Identifying the “Right Level” ...... 191


6.1.4 Identifying COSP using Classification versus Characteristics ...... 192

6.1.5 Objective and Subjective System Complexity ...... 193

6.2 AREAS FOR FUTURE RESEARCH ...... 193

6.2.1 Incorporating Uncertainty ...... 194

6.2.2 Adoption and Periodic Use of SEMDAM in a Program ...... 195

6.2.3 Further Research into Extrinsic and Intrinsic regulators ...... 195

References ...... 199

Appendix A COBPS ...... 224

A-1 REVIEW OF BREACH NOTIFICATION REQUIREMENTS ...... 224

A-1.1 Requirements: Notification to the Media ...... 225

A-1.2 Requirements: Notification to the Secretary ...... 225

A-2 LONGITUDINAL STUDY OF HHS/OCR BREACH PORTAL...... 225

A-2.1 Longitudinal Study Activities ...... 226

A-2.2 Kind of Breach Analysis (‘Electronic’ vs ‘Physical’) ...... 228

A-2.3 Nature of Breach Analysis (‘Malicious’ vs ‘Negligent’) ...... 228

A-2.4 Identification of State for Responsible Breach Entity ...... 228

A-3 SUMMARY ...... 229


List of Figures

FIGURE 1-1: RESEARCH AND DISSERTATION ORGANIZATION FOR DEVELOPING SYSTEM ENGINEERING METHOD DIAGNOSTIC ASSESSMENT MODEL (SEMDAM) ...... 10

FIGURE 2-1: A SYSTEM ENGINEERING "METHOD" IS UTILIZED TO DELIVER A "SYSTEM" ...... 14

FIGURE 2-2: GRAPHICAL VIEW OF ASHBY'S THEORY OF REQUISITE VARIETY ...... 24

FIGURE 2-3: SYSTEM WITH ASHBY'S THEORY OF REQUISITE VARIETY ...... 36

FIGURE 2-4: MAKING A PREDICTION OR PERCEIVING A TREND ...... 37

FIGURE 2-5: DEVELOPMENT TIMELINE FOR EOS STANDARDS OF PRACTICE ...... 41

FIGURE 2-6: SNOWDEN'S LANDSCAPE OF MANAGEMENT PROVIDES INSIGHT INTO HIS INITIAL RESEARCH INTERTWINING ONTOLOGY AND EPISTEMOLOGY ...... 61

FIGURE 2-7: 2003 VERSION OF THE CYNEFIN FRAMEWORK BY KURTZ AND SNOWDEN ...... 63

FIGURE 2-8: 2007 VERSION OF THE CYNEFIN FRAMEWORK BY SNOWDEN AND BOONE ...... 64

FIGURE 2-9: DOMAINS OF UN-ORDERED, ORDERED AND DISORDER ...... 66

FIGURE 2-10: ORDERED DOMAINS INCLUDE THE SIMPLE DOMAIN AND THE COMPLICATED DOMAIN ...... 67

FIGURE 2-11: UN-ORDERED DOMAINS INCLUDE THE COMPLEX DOMAIN AND THE CHAOTIC DOMAIN ...... 67

FIGURE 2-12: REPRESENTATION OF SYSTEM OF INTEREST (SOI) ...... 75

FIGURE 2-13: GRAPHICAL SUMMARY OF COMPLEXITY MEASUREMENTS FOR SYSTEM STATES, TECHNOLOGY, PROCESS, PEOPLE/WORKFORCE, AND ENVIRONMENT ...... 80

FIGURE 2-14: PROPOSED TYPOLOGY OF SEMS IN RELATION TO COMPLEXITY ...... 86

FIGURE 3-1: THE INCOSE COMPLEX SYSTEMS WORKING GROUP USE OF THE CYNEFIN FRAMEWORK TO IDENTIFY CLASSES OF SYSTEMS PROBLEMS ...... 93

FIGURE 3-2: MITRE'S ENTERPRISE SYSTEMS ENGINEERING PROFILER IS ORGANIZED INTO FOUR QUADRANTS AND THREE RINGS ...... 100

FIGURE 3-3: EXAMPLE SITUATIONAL ASSESSMENT FROM SEA PROFILER ...... 102

FIGURE 3-4: SHEARD'S TYPES OF COMPLEXITY FRAMEWORK APPLIED TO ENTITIES ...... 103

FIGURE 3-5: DEDUCTIVE REASONING BEGINS WITH THEORY AND IS THEREFORE DEPENDENT ON EXISTENCE OF ACCEPTABLE THEORY ...... 105

FIGURE 3-6: INDUCTIVE REASONING BEGINS WITH OBSERVATION AND IS THEREFORE DEPENDENT ON EXISTENCE OF MEASUREMENT ...... 106

FIGURE 3-7: WHITE ON HOW PRACTICE DRIVES THEORY IN THE YEARS SINCE 1950 ...... 110

FIGURE 3-8: RELATIVE DIFFICULTY OF ENGINEERING VARIOUS TYPES OF SYSTEMS ...... 110

FIGURE 3-9: SYSTEM ENGINEERING METHOD DIAGNOSTIC ASSESSMENT MODEL (SEMDAM) ...... 114

FIGURE 3-10: GENERAL MODEL OF A PROCESS OR SYSTEM ...... 129

FIGURE 3-11: SEMDAM ATTRIBUTE SELECTION IS BASED ON SIX SIGMA'S PROGRESSIVE AND ITERATIVE APPROACH TO IDENTIFY CRITICAL OUTCOMES ...... 132

FIGURE 3-12: CHOOSING AN ANALYSIS: ATTRIBUTION SELECTION IMPACTS STATISTICAL METHOD SELECTION ...... 133

FIGURE 4-1: FUNCTIONS PERFORMED BY THE VARIOUS TYPES OF MARKETPLACES ...... 145

FIGURE 4-2: HEALTHCARE.GOV AND ITS SUPPORTING SYSTEMS ...... 146

FIGURE 4-3: TWO PRIMARY INDIVIDUAL INTERACTIONS WITH USNHC: ACQUIRE MEDICAL COVERAGE & OBTAIN MEDICAL ASSISTANCE ...... 167

FIGURE 4-4: USNHC CONGRESSIONAL AND HHS INITIATIVES OVERLAYING HEALTH CARE SPENDING FOR 13 HIGH-INCOME COUNTRIES ...... 180

FIGURE 6-1: ASHBY'S THEORY OF REQUISITE VARIETY REFLECTING BOTH INTRINSIC AND EXTRINSIC REGULATORS ...... 197

FIGURE A-1: ACTIVITY SUMMARY OF THE LONGITUDINAL STUDY TO DEVELOP THE CONSOLIDATED HHS/OCR BREACH PORTAL SUMMARY DATA SET (COBPS) ...... 226


List of Tables

TABLE 2-1: ASHBY'S DEFINED SYSTEM STATES ...... 24

TABLE 2-2: LANGTON & WOLFRAM'S DEFINED STATES FOR CA THAT SUPPORT UNIVERSAL COMPUTATION ...... 28

TABLE 2-3: TOWERS LEVELS OF APPROPRIATE COMPETENCIES ...... 29

TABLE 2-4: EOS STANDARDS OF PRACTICE ...... 40

TABLE 2-5: ASSUMPTIONS IN ORDERED (TRADITIONAL) SYSTEMS ENGINEERING ...... 43

TABLE 2-6: EUOS STANDARDS OF PRACTICE ...... 55

TABLE 2-7: SUMMARY OF CYNEFIN FRAMEWORK DOMAIN NAMES AND MULTI-ONTOLOGICAL FOUNDATIONS ...... 72

TABLE 2-8: SUMMARY OF 15504 PROCESS ASSESSMENT RANKING SCALE ...... 77

TABLE 2-9: SUMMARY OF COMPLEXITY MEASUREMENTS FOR SYSTEM STATES, TECHNOLOGY, PROCESS, PEOPLE/WORKFORCE, AND ENVIRONMENT ...... 79

TABLE 2-10: SEMDAM ALIGNMENT BETWEEN INCOSE'S COMPLEXITY WORKING GROUP, CYNEFIN'S TYPOLOGY OF OPERATING ENVIRONMENTS AND SEMDAM CANDIDATE EXPLANATIONS FOR COSP ...... 84

TABLE 2-11: PROPOSED ALIGNMENT BETWEEN INFERRED COSP AND COMPLEXITY APPROPRIATE SEM ...... 88

TABLE 3-1: DEFINITION OF EVIDENCE E FOR SEMDAM ...... 98

TABLE 3-2: SUMMARY OF ALTERNATIVES CONSIDERED ...... 98

TABLE 3-3: CHARACTERIZATION OF DATA TYPES ...... 128

TABLE A-1: LONGITUDINAL VARIABLES FOR HHS/OCR BREACH PORTAL STUDY ...... 227

TABLE A-2: CONSOLIDATED HHS/OCR BREACH PORTAL SET (COBPS) ...... 230


List of Acronyms

ANOVA Analysis of Variance
BMA Business or Mission Analysis
CAS Complex Adaptive Systems
COBPS Consolidated OCR Breach Portal Set
COSP Class of System Problem
CSE Complex Systems Engineering
CSM Complex Systems Methods
DAM Diagnostic Assessment Model
DoE Design of Experiments
DV Dependent Variable
EHR Electronic Health Record
EMR Electronic Medical Record
EOS Engineering Ordered Systems
ESE Enterprise Systems Engineering
ESM Enterprise Systems Methods
EUOS Engineering Un-Ordered Systems
FDSH Federal Data Services Hub
FFE Federally Funded Exchange
FM Federal Marketplace
HCP HealthCare Provider
HCPB HealthCare Provider Breach
HIE Health Insurance Exchange
HIPAA Health Insurance Portability and Accountability Act
HSoS Healthcare System of Systems
IA Individuals Affected
IEC International Electrotechnical Commission
IEEE Institute of Electrical and Electronics Engineers
INCOSE International Council on Systems Engineering
ISO International Organization for Standardization
IV&V Independent Verification and Validation


MBSE Model Based Systems Engineering
NHIN National Healthcare Information Network
ONC The Office of the National Coordinator for Health IT
PHI Protected Health Information
PII Personally Identifiable Information
PM/SEM Program Management and/or System Engineering and Management
PQC Process Quality Characteristic
QHP Qualified Health Plan
SBE State Based Exchange
SE Systems Engineering
SEH Systems Engineering Handbook
SEM System Engineering Method
SEMDAM System Engineering Method Diagnostic Assessment Model
SHEII State Health Environment Information Integrity
SOI System of Interest
SoSE Systems of Systems Engineering
SoSM System-of-Systems Methods
SSM Soft Systems Methodology
TSE Traditional Systems Engineering
TSM Traditional Systems Methods
USNHC United States National HealthCare


1 Introduction

Everybody Talks About the Weather, But Nobody Does Anything About It.
Charles Dudley Warner (often attributed incorrectly to Mark Twain)

Schlager wrote "increased complexity of systems … has led to an emphasis on the field of systems engineering" adding "the first need for systems engineering was felt when it was discovered that satisfactory components do not necessarily combine to produce a satisfactory system." (Schlager, 1956) ISO/IEC/IEEE Standard 15288, Systems and software engineering – System life cycle processes, states "the complexity of man-made systems has increased to an unprecedented level. This has led to new opportunities, but also to increased challenges for the organizations that create and utilize systems." (ISO/IEC/IEEE 15288:2015(E), 2015, p. vii) INCOSE wrote:

Some consider systems engineering to be a young discipline, while others consider it to be quite old. Whatever your perspective, systems and the practice for developing them has existed a long time. The constant through this evolution of systems is an ever-increasing complexity which can be observed in terms of the number of system functions, components, and interfaces and their non-linear interactions and emergent properties. Each of these indicators of complexity has increased dramatically over the last fifty years, and will continue to increase due to the capabilities that stakeholders are demanding and the advancement in technologies that enable these capabilities. (INCOSE, 2014, p. 13) {emphasis added}

von Bertalanffy wrote "modern technology and society have become so complex that the traditional branches of technology are no longer sufficient; approaches of a holistic or systems, and generalist and interdisciplinary, nature became necessary." (von Bertalanffy, The History and Status of General Systems Theory, 1972) MITRE wrote "the complexity we are seeing in the enterprises and systems that MITRE helps engineer requires a spectrum of systems engineering techniques." (MITRE, 2014, p. 37) Piaszczyk wrote "dealing with the complexity of modern systems requires a complete revision of approaches and methods of systems engineering" adding "the old ways won't do anymore." (Piaszczyk, 2011)

Different classes of problems warrant different classes of solutions. Shenhar and Bonen wrote "One of the difficulties in developing a better understanding of systems engineering is that little distinction has been made in the literature between the system type and its strategic type, or its systems engineering and managerial problems" adding "both style and systems engineering practice differ with each specific kind of system and that management attitudes must be adapted to the proper system type." (Shenhar & Bonen, 1997)

While IEEE describes the 15288 system life cycle processes as "a common framework" that "can be applied at any level in the hierarchy of a system's structure," 15288 does not include guidance on identification or distinction of system type and/or strategic type, stating "users of {15288} are responsible for selecting a life cycle model for the project and mapping the processes, activities, and tasks … into that model" adding "The parties are also responsible for selecting and applying appropriate methodologies, methods, models and techniques suitable for the project." (ISO/IEC/IEEE 15288:2015(E), 2015, p. 2) Orasanu and Shafto wrote that system design mistakes occur when people misclassify or misdiagnose situations. (Orasanu & Shafto, 2009)

Maier questioned "whether or not there is a useful taxonomic distinction between various complex, large-scale systems that are commonly referred to as systems-of-systems" adding "For there to be a useful taxonomic distinction, we should be able to divide systems of interest into two (or more) classes such that the members of each class share distinct attributes, and whose design, development, or operations pose distinct demands." (Maier, Architecting Principles for Systems-of-Systems, 1998)

Shenhar and Bonen wrote "Systems engineering and program management must be conducted according to the proper style and be adapted to the system type" adding "when a wrong style is utilized, or when the system is misclassified, may result in substantial difficulties and delays in the process of the system creation." (Shenhar & Bonen, 1997) Gilbertson, Tanju and Eveleigh wrote "misclassification of systems may impact successful deployment." (Gilbertson, Tanju, & Eveleigh, 2017) Maier described system misclassification as "incorrectly regarding a system-of-systems (SoS) as a monolithic system, or the reverse" warning future SEs "they may use inappropriate mechanisms for ensuring collaboration and may assume cooperative operations across administrative boundaries that will not reliably occur in practice." (Maier, Architecting Principles for Systems-of-Systems, 1998) Shenhar and Bonen wrote "adapting the wrong system and management style may cause major difficulties during the process of system creation." (Shenhar & Bonen, 1997) INCOSE describes a limitation of systems engineering practice in that it "is only weakly connected to the underlying theoretical foundation" stressing that "understanding the foundation enables the systems engineer to evaluate and select from an expanded and robust toolkit." (INCOSE, 2014, p. 40)

Shenhar and Bonen wrote "the creation of complex man-made systems probably has its historical roots in early civilization." (Shenhar & Bonen, 1997) INCOSE wrote "advancements in technology not only impact the kinds of systems that are developed, but also the tools used by systems engineers" adding "system failures have provided lessons that impact the practice, and factors related to the work environment remind us that systems engineering is a human undertaking." (INCOSE, 2014, p. 13) Conway's Law states "organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations" adding "the design which occurs first is almost never the best possible, the prevailing system concept may need to change." (Conway, 1968)

The most specific description of the need for SE to understand the class of system problem (COSP) and select an appropriate system engineering method (SEM) is provided by DeRosa, Grisogono, Ryan, and Norman who posit that it is not possible to engineer complex systems using non-complex processes:

Real world problems that display certain properties, possess a degree of complexity that require a commensurately complex approach. Theoretical support for this assertion can be drawn from Ashby's Law of Requisite Variety, which demonstrates that a system must have sufficient variety – and consequently sufficient complexity – for the problem it is designed to solve, and from Bar Yam's proof that the functional complexity of a system scales exponentially with the complexity of environmental variables. Together, these theorems imply that genuinely complex needs can only be met by a sufficiently complex system. (DeRosa, Grisogono, Ryan, & Norman, 2008)

1.1 General Description of the Problem

One of the negative trends in the global environment affecting the state of SE practice is the lack of a set of consistent terminology and definitions. INCOSE wrote:

There is no agreed set of unified principles and models to support systems engineering use over a wide range of domains. Nor is there a set of consistent terminology and definitions. These two deficiencies impede the adoption of systems engineering and create problems. (INCOSE, 2007, p. 10)

Standards of SE practice from the world's two leading engineering societies, INCOSE and IEEE, address multiple SEMs without definition or selection criteria:

• INCOSE addresses the application of SE principles for systems, systems of systems (SoS) and enterprise systems (INCOSE SEH, 2015, pp. 8, 25, 175); and,

• IEEE highlights the need to understand the differences in engineering systems and SoS with the inclusion of Annex G, Application of system life cycle processes to a system of systems, warning "the complexity of the constituent systems and the fact they may have been designed without regard to their role in the SoS, can result in new, unexpected behaviors" adding "identifying and addressing unanticipated emergent results is a particular challenge in engineering SoS." (ISO/IEC/IEEE 15288:2015(E), 2015, p. 102)

While SE has evolved from classical or traditional SE (TSE) to include additional SE bodies of knowledge, such as systems-of-systems engineering (SoSE), there is no agreed-upon typology of SEMs nor methodology for SEM selection, leaving each SE practitioner to apply personal judgment in selecting and tailoring an appropriate SEM. (Gilbertson, Tanju, & Eveleigh, 2017) IEEE specifically excludes the process of SEM selection, writing "users of this International Standard {15288} are responsible for selecting a life cycle model for the project and mapping the processes, activities, and tasks … into that model" adding "The parties are also responsible for selecting and applying appropriate methodologies, methods, models and techniques suitable for the project." (ISO/IEC/IEEE 15288:2015(E), 2015, p. 2) INCOSE uses System of Interest (SOI) as a generic undifferentiated system classification abstraction representing: system element, system, system-of-systems, and/or systems-of-systems, writing that a challenge of system definition "is to understand what level of detail is necessary" adding "because SOIs are in the real world, this means that the response to this challenge will be domain specific." (INCOSE SEH, 2015, p. 7) INCOSE continues:

The art of defining a hierarchy within a system relies on the ability of the systems engineer to strike a balance between clearly and simply defining span of control and resolving the structure of the SOI into a complete set of system elements that can be implemented with confidence. (INCOSE SEH, 2015, p. 8)

Both societies cite complexity as a basis for selection, application, and tailoring of SEMs and process(es); yet, neither includes a definition of complexity to apply. Both societies identify the importance of complexity:

• INCOSE wrote "the appropriate degree of formality in the execution of any SE process activity" is determined in part by "the degree of complexity" and includes 'complexity' as one of the system fundamentals from a systems science context perspective; (INCOSE SEH, 2015, pp. ix, 18)

• INCOSE wrote "Because an SoS is itself a system, the systems engineer may choose whether to address it as either a system or as an SoS, depending on which perspective is better suited to a particular problem." (INCOSE SEH, 2015, p. 8) adding that the "appropriate degree of formality in the execution of any SE process activity" should be based on the SE's perceived need for communication, level of uncertainty, degree of complexity, and consequences to human welfare. (INCOSE SEH, 2015, p. ix)

• IEEE wrote "the detail of the life cycle implementation within a project is dependent upon the complexity of the work, the methods used, and the skills and training of personnel involved in performing the work." (ISO/IEC/IEEE 15288:2015(E), 2015, p. 24)

INCOSE wrote that "stakeholders are demanding increasingly capable systems that are growing in complexity, yet complexity-related system misunderstanding is at the root of significant cost overruns and system failures" adding "there is broad recognition that there is no end in sight to the system complexity curve." (INCOSE, 2014, p. 29) INCOSE wrote SE "must scale and add value to a broad range of systems, stakeholders, and organizations with diversity of size and complexity" while avoiding the perception of SE processes as "burdensome, heavyweight efforts, leading to unjustified cost and time overheads." (INCOSE, 2014, p. 25; INCOSE, 2007, p. 14)

1.2 Major Research Questions

Delivery of a successful system at an acceptable cost is dependent upon use of a SEM that is drawn from a typology of SEMs grounded in SE theory, appropriate for the COSP based on evidence and logic rather than characteristics and assumptions, and identified using a demonstrable diagnostic assessment model (DAM). Therefore, the research objectives are:

• Development of a generalizable typology of the existing and emerging set of SEMs based on SE theory, verified by descriptive method in Section 2.3.3, Codifying TSM; Section 2.3.4, Codifying SoSM; Section 2.4.2, Codifying ESM; Section 2.4.3, Codifying CSM; and Section 3.1.2.2, Defining candidate explanations H1, …, Hn, and by qualitative method in Chapter 5;

• Development of a COSP sensing framework, verified by descriptive method in Section 2.5, Cynefin sense-making Framework, and Section 3.1.2.2, Defining candidate explanations H1, …, Hn, and by qualitative method in Chapter 5;

• Development of a generalizable diagnostic assessment model (DAM) to assess COSP and recommend a complexity appropriate SEM, verified by descriptive and correlational methods in Chapter 3 and by qualitative method in Chapter 4; and,

• Demonstration of the System Engineering Method Diagnostic Assessment Model (SEMDAM), verified by qualitative method in Chapter 4.


1.3 Significance & Justification

Sheard et al. wrote "Systems engineers' toolkits should include a wide range of methods and processes to address environmental and system complexity in appropriate and useful ways." (Sheard, et al., A Complexity Primer for Systems Engineers, 2016)

Delivery of a successful service or system at an acceptable cost is dependent upon selection and use of an appropriate SE method suitable for the problem. This research applies the Cynefin sense-awareness framework to develop a diagnostic assessment model (DAM) in which complexity is measured by the association of a priori predictions of system output, or a posteriori perceptions of system response, made by program management (PM)/system engineering management (SEM); the DAM uses this measure to recommend an appropriate SE method for a broad range of SE problems of increasing complexity. Sheard et al. wrote:

A key first step is one of diagnosis – the systems engineer must identify the kind and extent of complexity that bears on the problem set. As we have seen, complexity can exist in the problem being addressed, in its environment or context, or in the system under consideration for providing a solution to the problem. The diagnoses made will allow the systems engineer to tailor his/her approaches to key aspects of the systems engineering process: requirements elicitation, trade studies, the selection of a development process life cycle, solution architecting, system decomposition and subsystem integration, test and evaluation activities, and others. (Sheard, et al., A Complexity Primer for Systems Engineers, 2016)

Sheard wrote "no comprehensive measure of complexity is widely used in the systems engineering field today" but wrote "many benefits would come from having a well-understood way to quantify the complexity of a design or a development effort." Sheard concluded "this is a field of great promise. The questions of how to measure complexity and how to use the measure to mitigate project problems would have a huge impact if solved." (Sheard S. A., Assessing the Impact of Complexity Attributes on System Development Project Outcomes, 2012, pp. 19, 178)


1.4 Scope and Limitations

Using the Cynefin framework, where complexity domains are associated with observation of cause and effect, SEMDAM provides a recommendation for a COSP appropriate SEM based on measuring statistically significant association for a priori prediction of system outcomes or a posteriori perception of system response. This research is based on observational study and not design of experiments; therefore, SEMDAM's statistical models look for correlation rather than causality. SEMDAM is based on abductive reasoning, also referred to as diagnosis, which typically begins with an incomplete set of observations and proceeds to the likeliest possible explanation. Analogous to a medical diagnosis, SEMDAM does not guarantee selection of the "best" SE method. Rather, SEMDAM provides a recommendation of an appropriate SE method based on the PM/SEM understanding of COSP. Since SEMDAM has both predictive and perspective elements, passage of time is simulated by evaluating current abductive logic statements as though they were conducted concurrently or previously as appropriate.
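To make the diagnostic flow described above concrete, the sketch below is a minimal illustration, not the dissertation's implementation: it walks an ordered set of candidate explanations, keeps the first one supported by the evidence, and maps the inferred COSP to a recommended SEM. The evidence flags, function and field names, and the COSP-to-SEM dictionary are illustrative assumptions; the actual evidence definitions, hypothesis evaluations (ABD1 through ABD5), and alignment are specified in Chapter 3 and Table 2-11.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative placeholder alignment between inferred COSP and recommended SEM;
# the dissertation's proposed alignment is defined in Table 2-11.
COSP_TO_SEM = {
    "Known": "TSM",
    "Knowable": "SoSM",
    "Complex": "ESM",
    "Chaos": "CSM",
    "Disorder": None,  # no recommendation until evidence of SE&M activity exists
}

@dataclass
class Evidence:
    """Hypothetical evidence flags standing in for SEMDAM's statistical tests."""
    sem_activity_present: bool       # any evidence of SE&M activity (else Disorder)
    a_priori_association: bool       # significant association for a priori predictions of outcomes
    cause_effect_self_evident: bool  # placeholder distinction between Known and Knowable
    a_posteriori_association: bool   # significant association for a posteriori perception of response

def infer_cosp(e: Evidence) -> str:
    """Evaluate candidate explanations in order and keep the first one supported."""
    if not e.sem_activity_present:
        return "Disorder"
    if e.a_priori_association:
        return "Known" if e.cause_effect_self_evident else "Knowable"
    if e.a_posteriori_association:
        return "Complex"
    return "Chaos"

def recommend_sem(e: Evidence) -> Optional[str]:
    """Recommend a SEM from the inferred COSP."""
    return COSP_TO_SEM[infer_cosp(e)]

# Illustrative use: cause and effect perceivable only in retrospect infers a
# Complex COSP and, under this placeholder mapping, recommends ESM.
print(recommend_sem(Evidence(True, False, False, True)))
```

Like a medical diagnosis, the chain above returns the likeliest explanation given incomplete evidence rather than a guaranteed best method.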

There is no consistent agreement within the SE community on an SE method ontology. While there is broad SE community agreement on the need to extend SE to better cope with increasing complexity, there is no consensus on the boundaries or scope(s) of the emerging SE methods. This research into developing a decision analysis model for SEM selection considered TSM, SoSM, ESM, and CSM, which may need to be revisited as existing SE methods evolve and/or new SE methods emerge. In addition to the complexity domains, the Cynefin framework describes transitions from one complexity domain to another, which was beyond the scope of this research.


1.5 Overview of Dissertation

Modern practitioners of systems engineering have multiple SE methods available for use in delivering modern systems. While the concept of complexity is integral to understanding how and when to apply appropriate SE methods, there is no agreed-upon definition of complexity nor is there an established best practice for selecting an SE methodology. Assuming that traditional SE methodologies work for all systems is a high-risk approach. This dissertation develops a complexity-based diagnostic assessment model, based on the Cynefin Framework, as a novel approach to recommend an appropriate SE method in order to eliminate or reduce the misclassification of systems and, by extension, system failure. An overview of this dissertation is presented using the Vee Model in Figure 1-1 to provide perspective for the document chapters and appendix.

Figure 1-1: Research and Dissertation Organization for Developing System Engineering Method Diagnostic Assessment Model (SEMDAM)


Following this introductory chapter, this document is organized as follows:

• Chapter 2, Literature Review, describes research related to the concepts of systems, SE methodologies, and system complexity;

• Chapter 3, SEMDAM Methodology, describes the research design, defines SEMDAM, and describes attribute and statistical model selection;

• Chapter 4, SEMDAM Applied to an Empirical Case Study, introduces and analyzes the U.S. National HealthCare (USNHC) case study;

• Chapter 5, Synthesis & Discussion, summarizes the findings from application to validate the SEMDAM methodology;

• Chapter 6, Conclusions, presents conclusions and areas for future research; and,

• Appendix A, COBPS, summarizes development of the Consolidated OCR Breach Portal Set (COBPS).


2 Literature Review

In a certain sense, it can be said that the notion of system is as old as European philosophy. Man, in early culture, and even primitives of today, experience themselves as being “thrown” into a hostile world, governed by chaotic and incomprehensible demonic forces which, at best, may be propitiated or influenced by way of magical practices. Philosophy and its descendant, science, was born when the early Greeks learned to consider or find, in the experienced world, an order or kosmos which was intelligible and, hence, controllable by thought and rational action. (von Bertalanffy, 1972)

Chapter 2 is organized around: philosophies and theories that provide a foundation for SE; current and evolving SE methods (SEMs) that identify and describe bodies of SE expert knowledge, processes, and tools; and a sense-awareness framework to identify the class of system problem (COSP). Contributions from other technical and managerial disciplines are presented within the scope of the most applicable SE topic.

Merriam-Webster defines theory as "a plausible or scientifically acceptable general principle or body of principles offered to explain phenomena." (Merriam-Webster, 2017) Nilsen wrote "a theory may be defined as a set of analytical principles or statements designed to structure our observation, understanding and explanation of the world" describing a theory as a combination of "definitions of variables, a domain where the theory applies, a set of relationships between the variables and specific predictions" adding "A 'good theory' provides a clear explanation of how and why specific relationships lead to specific events." (Nilsen, 2015) Section 2.1, SE Theoretical Foundations, describes Theories from Philosophy, Theories from Classical Sciences, and Theories from Systems Science.

INCOSE wrote "Systems engineering has evolved from a combination of practices used in a number of related industries" adding "SE practices are still largely based on heuristics." (INCOSE SEBoK, 2016, p. 28) Merriam-Webster defines heuristic as "involving or serving as an aid to learning, discovery, or problem-solving by experimental and especially trial-and-error methods." (Merriam-Webster, 2017) INCOSE wrote "Most systems engineers are practitioners, applying processes and methods… tailored to their domain's unique problems" adding "these processes and methods have evolved to capture domain experts' knowledge regarding the best approach to applying SE." (INCOSE SEBoK, 2016, p. 71)

SE literature contains a recurring bifurcation of systems methods exemplified by Ramos, Ferreira and Barcelo who wrote, "Surprisingly, in a recent and evolving field, there are already references to 'the old SE' (or the traditional, the classical, the ordered) and 'the new SE'." (Ramos, Ferreira, & Barcelo, 2012) Section 2.3, Engineering Ordered Systems (EOS), describes established SE methods proven effective in solving system and system-of-systems class problems using processes and methods based on theories from classical science, subject to the assumptions that underlie those theories. Section 2.4, Engineering Un-Ordered Systems (EUOS), describes emerging SE methods that are focused on enterprise and complex class problems using processes and methods based on the theories from systems science.

INCOSE wrote “Systems engineering continues to evolve in response to a long history of increasing system complexity” adding “Much of this evolution is in the models and tools focused on specific aspects of SE.” (INCOSE SEBoK, 2016, p. 28) Nilsen wrote:

A model typically involves a deliberate simplification of a phenomenon or a specific aspect of a phenomenon. Models need not be completely accurate representations of reality to have value. Models are closely related to theory and the difference between a theory and a model is not always clear. Models can be described as theories with a more narrowly defined scope of explanation; a model is descriptive, whereas a theory is explanatory as well as descriptive. (Nilsen, 2015)


The Oxford dictionary contains two definitions of system used in this research: (1) "a set of things working together as parts of a mechanism or an interconnecting network; a complex whole," and (2) "a set of principles or procedures according to which something is done; an organized scheme or method." (Oxford Dictionaries, 2017) As shown in Figure 2-1, "system" describes the intended result (either a system or a service) per the first definition, while "method" describes the use of a specific body of system engineering knowledge, discipline(s), process(es), framework(s), approach(es) or system development life cycle(s) (SDLC) to structure, plan, develop, and deliver a system per the second.

Figure 2-1: A System Engineering "Method" is utilized to deliver a "System"

• Traditional Systems Method (TSM) – This term is used to describe delivery of a Traditional System (also called a Classical System in literature) using Traditional Systems Engineering (TSE) described in Section 2.3.3, Codifying TSM;

• System-of-Systems Method (SoSM) – This refers to delivery of a System-of-Systems (SoS) using Systems-of-Systems Engineering (SoSE) described in Section 2.3.4, Codifying SoSM;

• Enterprise Systems Method (ESM) – This refers to delivery of an Enterprise System (ES) using Enterprise Systems Engineering (ESE) described in Section 2.4.2, Codifying ESM; and,

• Complex Systems Method (CSM) – This refers to delivery of a Complex System (CS) using Complex Systems Engineering (CSE) described in Section 2.4.3, Codifying CSM.


This research leverages the Cynefin sense-awareness framework (described in Section 2.5, Cynefin sense-making Framework) which uses complexity as the basis for categorization, identifying the class of system problem (COSP) faced by a leader or decision maker. Nilsen wrote:

A framework usually denotes a structure, overview, outline, system or plan consisting of various descriptive categories, e.g. concepts, constructs or variables, and the relations between them that are presumed to account for a phenomenon. Frameworks do not provide explanations; they only describe empirical phenomena by fitting them into a set of categories. (Nilsen, 2015)

Chapter 2, Literature Review, addresses the "lack of a set of consistent terminology and definitions" for SEMs based on fundamental SE theories, as well as the lack of a consistent approach to identify COSP based on a useful and measurable definition of system complexity. It contains descriptive research on the various types of systems and associated types of systems engineering available at the time of this study as context for Section 3.1.2.2, Defining candidate explanations H1, …, Hn, and Section 3.1.2.3, Inferring COSP 'Given Evidence E'.

2.1 SE Theoretical Foundations

Review of literature is an important aspect of SE research as Warfield wrote "virtually every important concept that backs up the key ideas emergent in systems literature is found in ancient literature and in the centuries that follow." (INCOSE SEH, 2015, p. 17) INCOSE wrote "To bridge the gap between different domains and communities of practice, it is important to first establish a well-grounded definition of the 'intellectual foundations of systems engineering', as well as a common language to describe the relevant concepts and paradigms" describing the need to "provide a framework and language that allow different communities, with highly divergent world-views and skill sets, to work together for a common purpose." (INCOSE SEBoK, 2016, p. 71)

Section 2.1 presents a subset of the theoretical foundations of SE organized by historical grouping of philosophers and/or theories with similar worldviews or Weltanschauung. Section 2.1.1, Theories from Philosophy, presents the historical theories of Teleology and Vitalism. Section 2.1.2, Theories from Classical Sciences, presents the more recent theories of Mechanism and Evolution that adhere to the 'system-as-machine' paradigm and provide the foundation for Engineering Ordered Systems (EOS). Section 2.1.3, Theories from Systems Science, presents modern theories (e.g., General Systems Theory, Cybernetics, and System Dynamics) that adhere to the 'system-as-organism' paradigm and provide the theoretical foundation for Engineering Un-Ordered Systems (EUOS). While Complexity Theory is part of Systems Science, due to the importance of complexity in applying the Cynefin framework and developing SEMDAM, Section 2.6, Defining System & Model Complexity, provides an in-depth treatment of the topic and presents the COSP Complexity construct used in this research.

2.1.1 Theories from Philosophy

Teleology and Vitalism are based on entelechy, which is defined as "that which realizes or makes actual what is otherwise merely potential." (Encyclopaedia Britannica, 2017) Gorod, Gandhi, White, Ireland and Sauser wrote "the Greek word sustema stood for reunion, conjunction, or assembly" adding "the concept of system surfaced during the seventeenth century, meaning a collection of organized concepts mainly in the philosophical sense." (Gorod, Gandhi, White, Ireland, & Sauser, 2015, p. 19) While falling into disfavor with the creation and adoption of theories from Classical Science, Aristotle's core concept of emergence experienced a resurgence with Systems Science.

2.1.1.1 Teleology

Aristotle is credited with developing the theory of teleology, in which each thing {e.g., system} contains a reason or explanation of its end, purpose, or goal. Aristotle taught that all things have a combination of essential properties, which define that thing, and accidental properties, which are free to vary. (Dennett, 1996) One of the earliest documented references to this concept of system is from Aristotle in 350 BC: "In the case of all things which have several parts and in which the totality is not, as it were, a mere heap, but the whole is something beside the parts." (Aristotle, 350 B.C.) von Bertalanffy paraphrased Aristotle's concept of emergence as "the whole is more than the sum of its parts" and wrote that it "is a definition of the basic system problem which is still valid." (von Bertalanffy, The History and Status of General Systems Theory, 1972)

Several centuries later, Kurt Koffka described the situation when "a perceptual system forms a percept or gestalt, the whole thing has a reality of its own, independent of the parts," which was captured in the phrase "the whole is other than the sum of its parts." (Dewey, 2007) Patterson described that Aristotle's concept of entelechy remains valid when he wrote "as pointed out as early in the writing of Aristotle, it is exactly the identification of purpose with the organization of components that defines the approach that we refer to as systems engineering." (Patterson, 2009, p. 65)

2.1.1.2 Vitalism Theory

Weckowicz wrote "The vitalistic theory of life proposed by Georg Ernst Stahl was a reaction to the simplistic mechanistic theories of the seventeenth century" adding "Stahl in his Theoria Medica Vera (1707) postulated the existence of a vital force, the vitalistic essence, called by him 'soul,' which characterized all living organisms, and more generally all living matter in contradistinction to inanimate matter. The vital force was underlying all life phenomena." (Weckowicz, 2000) Weckowicz wrote:

Throughout the nineteenth century the field of biology was dominated by the controversy between Mechanistic and Vitalistic theories of life. According to the Mechanists life processes could be completely reduced to physico-chemical events, which meant that they could be fully explained by the laws of physics and chemistry. Consequently, life processes could be fully accommodated within the framework of classical thermodynamics. According to the Vitalists these processes could not be so explained. (Weckowicz, 2000)

The theory of Teleology was a predominant belief system for several thousand years. Eventually, theories from the Classical Sciences, discussed next, displaced theories from Philosophy as part of a larger movement towards classical science.

2.1.2 Theories from Classical Sciences

von Bertalanffy wrote "the Scientific Revolution of the sixteenth-seventeenth centuries replaced the descriptive-metaphysical universe epitomized in Aristotle's doctrine by the mathematical-positivistic or Galilean concept" adding "the vision of the world as a teleological cosmos was replaced by the description of events in causal, mathematical laws." The key ideas within the classical sciences are represented by Descartes' second maxim to "break down every problem into as many separate simple elements as might be possible" and Galileo's resolutive methods to "reduce complex phenomena into elemental parts and processes." (von Bertalanffy, The History and Status of General Systems Theory, 1972)

Descartes postulated that the love of knowledge, and its exploration, by all thinkers would lead to limited progress, so he proposed a better approach based on specialization in which individuals would focus on specific areas of knowledge. Gorod, White, Ireland, Gandhi and Sauser wrote "this separation, and the related notion of a hierarchy of knowledge, is called reductionism" adding "the implication of reductionism is that a system is no more than the sum of its parts." (Gorod, White, Ireland, Gandhi, & Sauser, Preface, 2015, p. xi) The primary theories from the Classical Sciences, Mechanism and Natural Selection, are described below.

2.1.2.1 Mechanism Theory

Man-made machines, such as steam or combustion engines, became the models of living systems. The functioning of these machines could be understood within the framework of classical thermodynamics formulated by Robert Mayer in 1842. (Weckowicz, 2000) It was hoped by biologists and by physiologists that living systems would also fit the framework of classical thermodynamics and that they could be understood in mechanistic terms as physico-chemical machines. (Weckowicz, 2000) Weckowicz wrote:

In the seventeenth century under the influence of the Cartesian, Galilean and Newtonian theories in physics, and pioneering discoveries in chemistry a tendency developed to explain life processes in mechanistic terms. Either as systems of pulleys and levers, or as hydraulic systems activated by pressure of fluids. Alternative explanations were offered by chemists who explained life processes in terms of fermentations or in terms of acids interacting with alkali. (Weckowicz, 2000)

von Bertalanffy described mechanism as a "robot model" where behavior was "explained by the mechanistic stimulus-response schedule; conditions, according to the pattern of animal experiment, appears as the foundation of human behavior; 'meaning' was replaced by conditioned response; and specificity of human behavior was to be denied." (von Bertalanffy, General System Theory, 1972, p. 188)


2.1.2.2 Theory of Evolution

Darwin's Theory of Evolution, The Origin of Species, was published in 1859. Darwin's objective was to identify one classification scheme, based on a historical approach where "species are not eternal and immutable; they have evolved over time and can give birth to new species in turn." (Dennett, 1996) Arnold and Fristrup wrote "hierarchical structure has long been fundamental to our understanding of biology, both in the anatomy of individuals and in the systematic classification of individuals into higher level aggregates" adding "species, populations, individuals, and genes are widely recognized as fundamental expressions of the evolutionary process." (Arnold & Fristrup, 1982) Dennett wrote:

Darwin succeeded not only because he documented his ideas exhaustively but also because he grounded them in a powerful theoretical framework. In modern terms, he had discovered the power of an algorithm which is a formal process that can be counted on to yield a certain kind of result whenever it is ‘run’ or instantiated. (Dennett, 1996)

Darwin attributed the development of new species to the accumulation of chance minimal variations, to random genetic mutations, and to the pressure of natural selection. (Weckowicz, 2000)

2.1.3 Theories from Systems Science

INCOSE wrote “systems science is an integrative discipline bringing together research of systems with the goal of identifying, exploring, and understanding patterns of complexity to provide a common language and an intellectual foundation for systems engineering” adding “Research in systems science attempts to compensate for the inherent limitations of classical science, most notably the lack of ways to deal with emergence.” (INCOSE SEH, 2015, p. 18)


While Classical Science theory may be summed up by the phrase "a system is no more than the sum of its parts," Systems Science takes a very different perspective that may be summarized as "the whole is more than the sum of its parts." von Bertalanffy wrote "in order to understand an organized whole we must know both the parts and the relations between them." (von Bertalanffy, The History and Status of General Systems Theory, 1972)

INCOSE wrote "The systems approach (derived from systems thinking) and systems engineering (SE) have developed and matured, for the most part, independently" adding "therefore, the systems science and the systems engineering communities differ in their views as to what extent SE is based on a systems approach and how well SE uses the concepts, principles, patterns and representations of systems thinking." (INCOSE SEBoK, 2016, p. 179)

2.1.3.1 General System Theory (GST)

General System Theory (GST) was codified when Braziller published von Bertalanffy's General System Theory: Foundations, Development, Applications in 1972. (von Bertalanffy, General System Theory, 1972) This book consolidated papers previously published from 1940 to 1969. (von Bertalanffy, The History and Status of General Systems Theory, 1972) Much of the modern SE body of knowledge leverages the concept of system that is attributed to the work of von Bertalanffy (1901-1972) who wrote "If you take any realm of biological phenomena … you will always find that the behavior of an element is different within from what it is in isolation" adding "You cannot sum up the behavior of the whole from the isolated parts." (von Bertalanffy, General System Theory, 1972, p. 68)


A biologist, von Bertalanffy developed general systems theory (GST) while "attempting to build a bridge between natural sciences and humanities." (Weckowicz, 2000) When von Bertalanffy started writing about general system theory in the 1940s, relatively little attention was paid to him; however, scientists have since become interested in his research and impressed by this promising effort to find common laws applying to such widely diverse subjects as biology, economics, psychology, and demography. Developments in information theory, computer technology, and cybernetics are related to general system theory and have contributed to it. (Weckowicz, 2000) Weckowicz wrote:

Throughout the nineteenth century the field of biology was dominated by the controversy between Mechanistic and Vitalistic theories of life. According to the Mechanists life processes could be completely reduced to physico-chemical events, which meant that they could be fully explained by the laws of physics and chemistry. Consequently, life processes could be fully accommodated within the framework of classical thermodynamics. According to the Vitalists these processes could not be so explained. The Vitalists assumed that in life processes in addition to the physical and chemical forces there was operating a specific agent which was only present in living matter. (Weckowicz, 2000)

Ludwig von Bertalanffy is mainly remembered as the originator of the open systems theory in biology, a theory which rejected both the mechanistic and the vitalistic explanations of life processes. (Weckowicz, 2000) von Bertalanffy wrote the Theory of

Open Systems “consists of the scientific exploration of "wholes" and “wholeness” which, not so long ago, were considered to be metaphysical notions transcending the boundaries of science.” (von Bertalanffy, The History and Status of General Systems Theory, 1972)

2.1.3.2 Cybernetics Theory

Wiener first coined the term “cybernetics” in 1948, writing that “the most fruitful areas for the growth of the sciences were those which had been neglected as a no-man’s land between various established fields” adding “there are fields of scientific work which have been explored from different sides of pure mathematics, statistics, electrical engineering,


and neurophysiology; in which every single notion receives a separate name from each group.” (Wiener, 1961, p. 2) von Bertalanffy wrote that cybernetics “is a theory of control systems based on communication (transfer of information) between system and environment and within the system, and control (feedback) of the system’s function in regard to environment.” (von Bertalanffy, General System Theory, 1972, p. 21)

Ashby published An Introduction to Cybernetics in 1956 writing “Cybernetics was defined by Wiener as ‘the science of control and communication, in the animal and the machine’ – in a word, as the art of steersmanship, and it is to this aspect that the book will be addressed.” (Ashby, 1956, p. 1) Ashby wrote “we shall examine the process of regulation itself, with the aim of finding out exactly what is involved and implied” adding

“we shall develop ways of measuring the amount or degree of regulation achieved, and we shall show that this amount has an upper limit.” (Ashby, 1956, p. 202) Ashby states the Law as “only variety can destroy variety;” (Ashby, 1956, p. 207) however, a more usable representation is that, in order for a system to remain stable, the variety in the regulator V_R must be at least as great as the variety in the disturbance being regulated, V_D.

Ashby’s Theory of Requisite Variety, shown in Figure 2-2, states:

V_O ≥ V_D − V_R (1)

Where:

V_O = Variety of Outcome measured logarithmically (i.e., Output)

V_D = Variety of Disturbance measured logarithmically (i.e., Input)

V_R = Variety of Regulator measured logarithmically (i.e., Feedback)

T = Table (i.e., System Function)


Figure 2-2: Graphical View of Ashby’s Theory of Requisite Variety

Ashby’s Theory of Requisite Variety defines the set of states of operation for systems shown in Table 2-1.

Table 2-1: Ashby’s Defined System States

System States | Condition | Meaning
Stable | V_O ≥ V_D − V_R | The regulator contains requisite variety to control the outcome for a given disturbance
Unstable | V_O < V_D − V_R | The regulator does not contain requisite variety to control the outcome for a given disturbance
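To make the stability test in Table 2-1 concrete, the following minimal Python sketch (an illustration only; the function names and example state counts are this author's assumptions, not Ashby's) measures variety logarithmically as the base-2 logarithm of the number of distinguishable states and applies the condition V_O ≥ V_D − V_R:

    import math

    def variety(states):
        # Variety measured logarithmically: log2 of the number of distinguishable states
        return math.log2(len(set(states)))

    def is_stable(disturbances, regulator_responses, acceptable_outcomes):
        # Ashby's condition from Table 2-1: stable when V_O >= V_D - V_R
        v_d = variety(disturbances)
        v_r = variety(regulator_responses)
        v_o = variety(acceptable_outcomes)
        return v_o >= v_d - v_r

    # Hypothetical example: 8 possible disturbances, a regulator with 4 distinct
    # responses, and 2 acceptable outcome states: 1 >= 3 - 2, so the system is stable
    print(is_stable(range(8), range(4), range(2)))

Reducing the regulator to only two responses (V_R = 1) in this hypothetical example makes the condition fail, matching the “Unstable” row of Table 2-1.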

2.1.3.3 Hard & Soft Systems Perspectives Methodologies

INCOSE SEBoK wrote “Hard systems views of the world are characterized by the ability to define purpose, goals, and missions that can be addressed via engineering methodologies in an attempt to, in some sense, “optimize” a solution” adding “In hard system approaches the problems may be complex and difficult, but they are known and can be fully expressed by the investigator.” (INCOSE SEBoK, 2016, p. 108)

INCOSE SEBoK wrote “Soft systems views of the world are characterized by complex, problematical, and often mysterious phenomena for which concrete goals cannot be established and which require learning in order to make improvement” adding

“Soft system approaches reject the idea of a single problem and consider problematic


situations in which different people will perceive different issues depending upon their own viewpoint and experience.” (INCOSE SEBoK, 2016, p. 108) Checkland wrote:

The word ‘system’ is used simply as a label for something taken to exist in the world outside ourselves. The taken-as-given assumption is that the world can be taken to be a set of interacting systems, some of which do not work very well and can be engineered to work better. In the thinking embodied in SSM the taken-as-given assumptions are quite different. The world is taken to be very complex, problematic, mysterious. However, our coping with it, the process of inquiry into it, it is assumed, can itself be organized as a learning system. Thus, the use of the word ‘system’ is no longer applied to the world, it is instead applied to the process of our dealing with the world. It is the shift of systemicity (or ‘systemness’) from the world to the process of inquiry into the world which is the crucial intellectual distinction between the two fundamental forms of systems thinking, ‘hard’ and ‘soft’.

In the literature it is often stated that ‘hard’ systems thinking is appropriate in well-defined technical problems and that ‘soft’ systems thinking is more appropriate in fuzzy ill-defined situations involving human beings and cultural considerations. (Checkland, 2000)

2.1.3.4 Systems Thinking Methodology

Systems thinking is a methodology for considering an entire system by viewing inputs that are processed to generate outputs. According to INCOSE, “systems thinking is the discovery of patterns” adding that a systems thinker “identifies the circular nature of complex cause-and-effect relationships.” (INCOSE SEH, 2015, p. 20) Senge wrote that,

“businesses and other human endeavors are also systems … bound by invisible fabrics of interrelated actions, which often take years to fully play out their effects on each other” adding “systems thinking is a conceptual framework, a body of knowledge and tools that has been developed over the past fifty years, to make the full patterns clearer, and to help us see how to change them effectively.” (Senge, 2000)

2.1.3.5 Complexity Theory

Sheard et al., wrote, “complexity is nothing new to systems engineers and managers” adding, “in ordinary language, we often call something complex when we


can’t fully understand its structure or behavior: it is uncertain, unpredictable, complicated or just plain difficult.” (Sheard, et al., INCOSE Complex Systems Working Group White

Paper - A Complexity Primer for Systems Engineers, 2015) The definition of complexity used in this research and its derivation is presented in Section 2.6, Defining System &

Model Complexity, below.

Manson wrote “advocates of complexity theory see it as a means of simplifying seemingly complex systems. The actual practice of complexity theory, however, is anything but simple in that there is no one identifiable complexity theory.” (Manson,

2001) While complexity theory began as a new way of working with mathematical models, the body of knowledge has adopted the precept that complexity is more a way of thinking about the world. Like SE, theoretical research into complexity has multiple variations or areas of interest. The complexity theories presented next are variations of theoretical research into the general subject of complexity, each of which has been adapted into the SE literature.

2.1.3.6 Computational Complexity Theory

Dodder and Dare wrote “one apparently crucial element in any reasonable measure of complexity is the information processed or exchanged by the system under study” adding

“Shannon’s information theory uses this quantity of information as an indicator of complexity. Another widely explored measure is the Algorithmic Information Content

(AIC), which relates complexity to the minimum amount of information needed to describe the system, as measured by the shortest computer program that can generate that system.” (Dodder & Dare, 2000) Langton researched cellular automata (CA) which he defined as “discrete space/time logical universes, obeying their own local physics”


adding “cellular automata can be viewed either as computers themselves or as logical universes, within which computers may be embedded.” (Langton, 1990, pp. 25, 35)
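As a rough illustration of the two information-based measures Dodder and Dare mention (Shannon’s quantity of information and the Algorithmic Information Content), the Python sketch below scores a patterned and a random symbol sequence. It is an illustration only: the symbol strings are invented, and zlib compression is used as a crude stand-in for AIC, since the true shortest generating program is uncomputable.

    import math, random, zlib
    from collections import Counter

    def shannon_entropy(seq):
        # Shannon entropy, in bits per symbol, of an observed symbol sequence
        counts = Counter(seq)
        n = len(seq)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    def description_length(seq):
        # Crude proxy for Algorithmic Information Content: the length of a
        # compressed description of the sequence (true AIC is uncomputable)
        return len(zlib.compress(seq.encode()))

    random.seed(0)
    ordered = "AAAB" * 100                                         # highly patterned
    disordered = "".join(random.choice("AB") for _ in range(400))  # random

    print(shannon_entropy(ordered), description_length(ordered))
    print(shannon_entropy(disordered), description_length(disordered))

The patterned sequence compresses to a much shorter description than the random one, mirroring the intuition that complexity relates to the minimum amount of information needed to describe the system.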

Langton published his dissertation on complexity science, The Edge of Chaos, which

“exhibits the nature of a complex system as characterized by neither order nor disorder.” (Gorod, Gandhi,

White, Ireland, & Sauser, 2015, p. 22) Langton’s research focused on providing

“evidence for the existence of ‘critical’ phase-transitions in the space of CA and for the identification of a transition regime with the most complex CA dynamics, those which support universal computation” adding “it means that information processing is likely to become an important factor in the dynamics of physical systems when they are in the vicinity of a phase-transition between ordered and disordered behavior.” (Langton, 1990, p. 6) Langton wrote “there have been other attempts to characterize those CA rules that would support universal computation” adding “the best known of these is due to Wolfram who proposed four qualitative classes of CA behavior.” (Langton, 1990, p. 3) Langton summarized the primary finding writing:

By defining the appropriate parameter {Lambda} over the space of possible CA rules, and by using this parameter to step through the space of CA rules in an ordered fashion, one passes through the spectrum of dynamical behaviors in the following order:

Fixed-point ⇒ periodic ⇒ “complex” ⇒ chaotic

This corresponds to passing through the Wolfram classes in the following order:

I ⇒ II ⇒ IV ⇒ III

This association between phase-transitions and computations provides a new perspective on computation in general, one which reveals a simple and elegant structure among what previously amounted to a large and unorganized collection of loosely related theorems, lemmas, theses, and observations. (Langton, 1990, p. 7)

Langton asserts that “there is a fundamental connection between phase-transitions and computations – in particular between critical dynamics and universal computation.”


(Langton, 1990, p. 5) Table 2-2 contains the spectrum of phase transitions for CA for both Langton and Wolfram.

Table 2-2: Langton & Wolfram’s Defined States for CA that Support Universal Computation

Langton’s CA States | Wolfram’s CA States | Meaning
Fixed point | Class I | Relax to a homogeneous fixed-point
Periodic | Class II | Relax to a heterogeneous fixed point or to short-period behavior
Complex | Class IV | Support complex interactions between localized structures, often exhibiting long transients
Chaotic | Class III | Relax to chaotic, random behavior
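A minimal sketch of the idea behind Langton’s λ parameter follows (illustrative only; the rule-table construction, lattice size, and the crude activity count are assumptions of this discussion, not Langton’s actual transient-length measurements): build radius-1 binary CA rules in which a fraction λ of the neighborhoods map to a non-quiescent state, then iterate each rule and observe how activity changes as λ is swept from ordered toward disordered behavior.

    import random

    def random_rule(lam, k=2, r=1, seed=0):
        # Rule table for a k-state, radius-r CA in which a fraction lam of the
        # k**(2r+1) neighborhoods map to the non-quiescent state 1
        random.seed(seed)
        return [1 if random.random() < lam else 0 for _ in range(k ** (2 * r + 1))]

    def step(cells, rule, k=2, r=1):
        # One synchronous update of a circular one-dimensional cellular automaton
        n = len(cells)
        nxt = []
        for i in range(n):
            code = 0
            for j in range(-r, r + 1):
                code = code * k + cells[(i + j) % n]
            nxt.append(rule[code])
        return nxt

    random.seed(42)
    initial = [random.randint(0, 1) for _ in range(64)]
    for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
        cells = list(initial)
        rule = random_rule(lam)
        for _ in range(50):
            cells = step(cells, rule)
        print(lam, sum(cells))   # crude activity measure after 50 steps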

2.1.3.7 Complexity in Organizational Theory

Conway wrote “organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations” adding

“This criterion creates problems because the need to communicate at any time depends on the system concept in effect at that time. Because the design which occurs first is almost never the best possible, the prevailing system concept may need to change.”

(Conway, 1968)

Rhodes, Lamb and Nightingale wrote “society as a whole is faced with increasingly complex systems problems in critical infrastructure, energy, transportation, communications, defense, and other areas” describing empirical research on systems thinking and the practice of systems engineering as “understanding how to achieve more effective SE practice through understanding of the context in which SE is performed and the factors underlying the competency of the systems workforce.”

(Rhodes, Lamb, & Nightingale, 2008) Understanding ‘the context in which SE is performed’ is discussed in Section 2.2.3, System Structure, below.


Towers described the enablers for model-based systems engineering (MBSE) as architecture frameworks, process frameworks, people, and tools, where the people involved have the appropriate workforce competencies shown in Table 2-3.

Table 2-3: Towers’ Levels of Appropriate Competencies

Practices | Work Type | Skill Level | How to Achieve
Best | “Assembly Line” | Proficiency | Training
Good | Information | Fluency | Training & Experience
Emergent | Knowledge | Literacy | Deliberate Practice
Novel | Concept | Mastery | Deliberate Practice (10,000 hrs)

(Towers, 2016, p. 17)

Gladwell provides verification for Towers’ uniquely specific association of mastery at

10,000 hours writing “the idea that excellence at performing a complex task requires a critical minimum level of practice surfaces again and again in studies of expertise.

Researchers have settled on what they believe is the magic number for true expertise” adding “the emerging picture from such studies is that ten thousand hours of practice is required to achieve the level of mastery associated with being a world-class expert in anything.” (Gladwell, 2008, p. 40)

2.1.3.8 Complex Adaptive Systems Theory

Gorod, Gandhi, White, Ireland, and Sauser wrote “the genesis of complex systems theory can be traced back to the cybernetics movement which started during World War

II” citing the works of Wiener and Ashby presented in Section 2.1.3.2, Cybernetics

Theory, adding “Langton (1990) called the complexity science “The Edge of Chaos,” which exhibits the nature of a CS (Complex System) as characterized by neither order nor disorder.” (Gorod, Gandhi, White, Ireland, & Sauser, 2015, p. 19)


Dodder and Dare wrote “The rise of complex adaptive systems (CAS) as a school of thought took hold in the mid-1980s with the formation of the Santa Fe Institute, a New

Mexico think tank formed in part by former members of the nearby Los Alamos National

Laboratory” adding “One important emphasis with CAS is on the crossing of traditional disciplinary boundaries. CAS provides an alternative to the linear, reductionist thinking that has ruled scientific thought since the time of Newton.” (Dodder & Dare, 2000)

Hayenga cited Kelly (1994) when he wrote “the key insight uncovered by the study of complex systems in recent years is this: the only way for a system to evolve into something new is to have a flexible structure” adding “a tiny tadpole can change into a frog, but a 747 Jumbo Jet can’t add six inches to its length without crippling itself.”

(Hayenga, 2008)

Sheard and Mostashari wrote “Management and technical work are much more meshed in complex systems work than in standard mechanistic engineering of small systems” recommending that technical managers “plot a complex situation on the

Cynefin instrument to identify patterns of problems and solutions.” (Sheard &

Mostashari, Principles of Complex Systems for Systems Engineering, 2009) The Cynefin sense-awareness framework is described in Section 2.5, Cynefin sense-making

Framework, below.

2.1.3.9 Chaos Theory

Chaos theory is a sub-discipline of the more general complexity theory that uses mathematics to study complex systems that are highly sensitive to initial conditions or rounding errors within computational systems – sometimes referred to as the butterfly effect – even when the underlying system is deterministic. Originally developed at MIT


to study weather, the theory was summarized by Lorenz who wrote “Chaos: When the present determines the future, but the approximate present does not approximately determine the future.” (Lorenz, 1963)
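A minimal sketch of this sensitivity to initial conditions follows, using the logistic map as a stand-in for Lorenz’s weather model (the map, its parameter, and the starting values are assumptions chosen only to illustrate the point): two trajectories that begin one millionth apart diverge to order-one separation within a few dozen fully deterministic iterations.

    def logistic(x, r=4.0):
        # A fully deterministic one-line model that behaves chaotically at r = 4.0
        return r * x * (1.0 - x)

    x, y = 0.400000, 0.400001   # two nearly identical "approximate presents"
    for n in range(1, 41):
        x, y = logistic(x), logistic(y)
        if n % 10 == 0:
            print(n, abs(x - y))   # the separation grows despite determinism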

2.1.4 Summary of SE Theoretical Foundations

Much of the body of SE knowledge contained within IEEE 15288 and INCOSE SEH is based on the theories from classical sciences. Sheard identified that systems engineering practices deal for the most part with improving order, primarily by using the same set of assumptions of order from Newtonian and Mechanical Systems. Sheard called this ordered systems engineering (OSE) which is discussed in Section 2.3,

Engineering Ordered Systems (EOS). (Sheard S. , Complex Adaptive Systems in Systems

Engineering and Management, 2009, p. 1284) Sillitto described the multiple fields of study ascribed to systems science and their potential application to SE when he wrote:

Reviewing these lists and discussions, and following a taxonomy set out by Bertalanffy it seems that they can be aggregated into four broad categories:

• Science of real systems governed by the laws of physics;
• Science of real systems governed by the “laws” of biology;
• Science of real systems governed by the “laws” of social behaviour; and,
• Science of conceptual systems, governed by the laws of mathematics and logic.

These can be referred to in short-hand as the science of “physical”, “biological”, “social” and “conceptual” systems. (Sillitto H. , 2012)

Sheard and Mostashari wrote “many aspects of the practice of systems engineering have become standardized across the industry. This is not to suggest, however, that this practice {SE} is based on an overarching and complete systems engineering theory”


adding “theory has found too little traction in improving systems engineering practice.”

(Sheard & Mostashari, A Complexity Typology for Systems Engineering, 2010)

2.2 Definition of System Used

Recall that system is used interchangeably either to identify a thing, “a set of things working together as parts” (i.e., engineering a system, system of interest, enabling system), or “a set of principles or procedures” (i.e., systems engineering, systems analysis, systems thinking, etc.). (Oxford Dictionaries, 2017) IEEE 15288 demonstrates the various uses writing “Systems Engineering is an interdisciplinary approach and means to enable the realization of successful systems” or “This International Standard provides a common process framework for describing the life cycle of systems created by humans, adopting a

Systems Engineering approach.” (ISO/IEC/IEEE 15288:2015(E), 2015, p. vii) The last example shows Systems associated with life cycle which is discussed below.

Given the potential for confusion, this research will annotate external references’ use of system as necessary to associate meaning from: system (thing), system (principles) or system (processes). Unless otherwise noted, use of the phrase system refers to system

(thing) which aligns with “combination of interacting elements organized to achieve one or more stated purposes.” (ISO/IEC/IEEE 15288:2015(E), 2015, p. 9) Sillitto wrote and INCOSE incorporated that a system {thing} has the following:

A life cycle – Life Cycle is used throughout SE literature as either the phases a project passes through, or principles and/or procedures; (Sillitto H. , 2012)

Function – What a system does; systems exist to deliver functionality; (Sillitto H. G., 2009)

Structure – Defined as a boundary, a set of parts, and the set of relationships and potential interactions between the parts of the system and across the boundary (interfaces); (INCOSE SEH, 2015, p. 20; Sillitto H. , 2012)


Behavior – the way in which someone or something functions or acts or reacts to a stimulus, or responds to situations; Including state change and exchange of information, energy and resources; and, (Sillitto H. G., 2009; Sillitto H. , 2012)

Performance characteristics – associated with function and behavior in given environmental conditions and system states. (INCOSE SEH, 2015, p. 20; Sillitto H. G., 2009; Sillitto H. , 2012)

INCOSE wrote “Systems contain multiple feedback loops with variable time constants” adding “cause‐and‐effect relationships may not be immediately obvious or easy to determine.” (INCOSE SEH, 2015, p. 21)
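A minimal sketch of that statement follows (illustrative only; the first-order lag model, gain, and time constant are assumed values, not drawn from the INCOSE text): a step change in the goal propagates to the output only gradually, so the effect of the cause is spread across many time steps rather than appearing at once.

    def simulate(steps=20, gain=1.0, tau=5.0, setpoint=10.0):
        # A single feedback loop with time constant tau: each step closes only
        # part of the gap between the output and the setpoint
        output, history = 0.0, []
        for _ in range(steps):
            error = setpoint - output          # feedback comparison
            output += gain * error / tau       # delayed, partial correction
            history.append(round(output, 2))
        return history

    print(simulate())   # the output approaches 10.0 over many steps, not immediately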

2.2.1 System Life Cycle Model

INCOSE wrote “Every man‐made system has a life cycle, even if it is not formally defined” adding “A life cycle can be defined as the series of stages through which something (a system or manufactured product) passes.” (INCOSE SEH, 2015, p. 25)

IEEE wrote “every system has a life cycle. A life cycle can be described using an abstract functional model that represents the conceptualization of a need for the system, its realization, evolution and disposal” adding “The organization may then employ this environment to perform and manage its projects and progress systems through their life cycle stages.” (ISO/IEC/IEEE 15288:2015(E), 2015, p. 9)

The phrase life cycle is used throughout SE literature to convey either the phases a project passes through, or principles and/or procedures. IEEE 15288 wrote “Life cycles vary according to the nature, purpose, use and prevailing circumstances of the system. Each stage has a distinct purpose and contribution to the whole life cycle and is considered when planning and executing the system life cycle” adding “the typical system life cycle stages include concept, development, production, utilization, support, and retirement.”

(ISO/IEC/IEEE 15288:2015(E), 2015, p. 14) Inclusion of typical life cycle stages is discordant with the warning provided regarding life cycle models:


This International Standard does not prescribe a specific system life cycle model, development methodology, method, model or technique. The users of this International Standard are responsible for selecting a life cycle model for the project and mapping the processes, activities, and tasks in this International Standard into that model. The parties are also responsible for selecting and applying appropriate methodologies, methods, models and techniques suitable for the project. (ISO/IEC/IEEE 15288:2015(E), 2015, p. 2)

The use of life cycle appears to refer to both the life cycle phases and life cycle models with ‘defined series of stages’ {when tasks are performed} and the SE methodology, defined as a collection of related processes, methods, and tools used during each stage

{how tasks are performed}. (INCOSE SEH, 2015, p. 90) This research posits that a system should not be viewed as having a life cycle; rather, selection of an appropriate life cycle {method} should depend on the system. Likewise, external reference to life cycle may be annotated to indicate life cycle (phases) or life cycle (models). Unless otherwise noted, use of Life Cycle means life cycle (model) while System Engineering Method

(SEM) is intended to refer to system (processes) and/or life cycle (model).

2.2.2 System Function

System function is the embodiment of stakeholder needs and the requirements definition process which “defines the stakeholder requirements for a system that can provide the capabilities needed by users and other stakeholders in a defined environment.” (INCOSE SEH, 2015, p. 52) INCOSE describes system function as “the functional boundaries of the system and the functions the system must perform,”

(INCOSE SEH, 2015, p. 281) adding:

The functionality of a system is typically expressed in terms of the interactions of the system with its operating environment, especially the users. When a system is considered as an integrated combination of interacting elements, the functionality of the system derives not just from the interactions of individual elements with the environmental elements but also from how these interactions are influenced by the organization (interrelations) of the system elements. (INCOSE SEH, 2015, p. 6)


While this definition of system functionality “speaks to both the internal and external views of the system,” (INCOSE SEH, 2015, p. 6) it is based on a ‘defined environment’, which in turn requires defining the system boundary, which is part of System Structure, discussed next.

2.2.3 System Structure

Three concepts define system structure: the boundary; the elements; and the interactions between and among the elements and in and out of the boundary. (INCOSE

SEH, 2015, p. 20) INCOSE wrote “The internal and external views of a system give rise to the concept of a system boundary. In practice, the system boundary is a “line of demarcation” between the system itself and its greater context (to include the operating environment)” adding “It defines what belongs to the system and what does not.”

(INCOSE SEH, 2015, p. 20) Sillitto wrote “Sometimes we are interested in a particular property of interest” adding “Once we have established the property of interest, the system of interest and corresponding system boundary can be determined by finding the set of parts and relationships that are necessary and sufficient to account for the property or properties of interest.” (Sillitto H. , 2012) INCOSE SEBoK described the process of identifying a System of Interest (SOI) writing:

When humans observe or interact with a system, they allocate boundaries and names to parts of the system. This naming may follow the natural hierarchy of the system, but will also reflect the needs and experience of the observer to associate elements with common attributes of purposes relevant to their own. This way of observing systems wherein the complex system relationships are focused around a particular system boundary is called systemic resolution. (INCOSE SEBoK, 2016, p. 121)
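A minimal sketch of that boundary-setting step follows (the element names and dependency map are invented for illustration; this is not Sillitto’s or the SEBoK’s procedure): start from a property of interest and retain exactly the parts and relationships needed to account for it, leaving everything else outside the boundary.

    # Hypothetical dependency map: each element lists the parts it relies on
    DEPENDS_ON = {
        "braking_distance": ["brakes", "tires"],
        "brakes": ["hydraulics"],
        "tires": [],
        "hydraulics": [],
        "radio": ["antenna"],   # unrelated to the property of interest
        "antenna": [],
    }

    def system_of_interest(property_of_interest):
        # Keep everything reachable from the property of interest; that set and
        # its relationships define the SOI and its boundary
        boundary, frontier = set(), [property_of_interest]
        while frontier:
            element = frontier.pop()
            if element not in boundary:
                boundary.add(element)
                frontier.extend(DEPENDS_ON.get(element, []))
        return boundary

    print(sorted(system_of_interest("braking_distance")))
    # ['brakes', 'braking_distance', 'hydraulics', 'tires'] - the radio falls outside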

IEEE 15288 defines systems elements as “hardware, software, data, humans, processes (e.g., processes for providing service to users), procedures (e.g., operator instructions), facilities, materials and naturally occurring entities.” (ISO/IEC/IEEE


15288:2015(E), 2015, p. 1) INCOSE SEH wrote “A system is made up of parts that interact with each other and the wider context” adding “The parts may be any or all of hardware, software, information, services, people, organizations, processes, services, etc.” (INCOSE SEH, 2015, p. 79) Interactions include both exchange of information with external systems across the system boundary and interactions among and between system elements within the system boundary.

2.2.4 System Behavior

System behavior includes state changes and exchange of information. System

Behavior is related to and dependent on system function. (INCOSE SEH, 2015, p. 20)

INCOSE wrote “a system is in a state when the values assigned to its attributes remain constant or steady for a meaningful period of time.” (INCOSE SEH, 2015, p. 6) INCOSE wrote “Systems contain multiple feedback loops with variable time constants, so that cause‐and‐effect relationships may not be immediately obvious or easy to determine.”

(INCOSE SEH, 2015, p. 21) Figure 2-3 shows an updated system representation using the system nomenclature of this section integrated with the graphical representation of

Ashby’s Theory of Requisite Variety.

Figure 2-3: System with Ashby’s Theory of Requisite Variety


Observing system behavior as states or state changes is integral to the SEMDAM requirement to be able to predict a future response or perceive a trend in a series of recent outputs. This concept of “meaningful period of time” raises the question of duration of observation or window of time which will be addressed in Section 2.6, Defining System

& Model Complexity. Ashby wrote “the most fundamental concept in cybernetics is that of difference, either that two things are recognizably different or that one thing has changed with time” adding “Every machine or dynamic system has many distinguishable states. If it is a determinate machine, fixing its circumstances and the state it is at will determine, i.e., make unique, the state it next moves to.” (Ashby, 1956, pp. 1, 27) This research is concerned with either an a priori prediction of future system outcomes or a perception of a posteriori trends as shown in Figure 2-4.

Figure 2-4: Making a Prediction or Perceiving a Trend
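The two modes in Figure 2-4 can be sketched in a few lines of Python (the transition table, output series, and window size are assumptions for illustration only): an a priori prediction reads the next state from a determinate machine’s table, as Ashby describes, while an a posteriori perception looks for a trend over a window of recent outputs.

    # A priori: for a determinate machine, the current state fixes the next state
    TRANSITIONS = {"idle": "active", "active": "degraded", "degraded": "idle"}

    def predict_next(state):
        return TRANSITIONS[state]

    # A posteriori: perceive a trend over a window of recent outputs
    def perceive_trend(outputs, window=5):
        recent = outputs[-window:]
        deltas = [b - a for a, b in zip(recent, recent[1:])]
        slope = sum(deltas) / len(deltas)
        return "rising" if slope > 0 else "falling" if slope < 0 else "steady"

    print(predict_next("active"))               # degraded
    print(perceive_trend([3, 4, 4, 6, 7, 9]))   # rising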

2.3 Engineering Ordered Systems (EOS)

Sheard wrote “systems engineering practices, as they are currently taught, address mostly the ‘order’ aspect of complex systems” in order “to increase predictability, control, system performance, and success of system development efforts” adding “we analyze using Gaussian or bell curve distributions and we plan, predict, and control in order to achieve a desired outcome.” (Sheard S. , Complex Adaptive Systems in Systems


Engineering and Management, 2009, p. 1283) Schlager wrote “the first need for systems engineering was felt when it was discovered that satisfactory components do not necessarily combine to produce a satisfactory system” adding “it should not seem unusual to find the first appearance of the term systems engineering in those industries which produced systems complexity at an early date” citing Bell Telephone Laboratories.

(Schlager, 1956)

Commercial use of the telephone began in 1876 between two instruments; however, it wasn’t until the 1940s that Bell Telephone Laboratories coined the phrase ‘systems engineering’ to describe the conception, design, development, production and operation of a national phone network comprised of physical telephony components (e.g., instruments {phones}, transmissions lines, switches, etc.) that had been in use in various forms and configurations for over 50 years.

INCOSE wrote “In the Apollo program, NASA added a ‘Module Level’ in the hierarchy to break out the Command Module, Lunar Module, etc. of the Space Vehicle

Element” noting that “Simple systems typically have fewer levels in the hierarchy than complex systems.” This led to the introduction of Systems‐of‐Systems (SoS) as “systems‐ of‐interest whose system elements are themselves systems; typically, these entail large‐ scale inter‐disciplinary problems involving multiple, heterogeneous, distributed systems.

These interoperating collections of component systems usually produce results unachievable by the individual systems alone.” (INCOSE SEH, 2011, p. 11) Complex and complicated are not synonymous. Hayenga wrote “quite often, we hear systems with many thousands of interacting parts, like the Space Shuttle or B-2 bomber, described as complex” adding “we need to consider those with many interacting parts to be


complicated rather than complex.” (Hayenga, 2008) DeRosa, Grisogono, Ryan and

Norman wrote:

Complex and complicated systems are often confused. The essence of complexity is interdependence. Interdependence implies that reduction by decomposition can't work, because the behavior of each component depends on the behaviors of the others. The Latin root of complex means to weave, whereas the root of complicated means to fold. Complicated systems can be unfolded into simpler components- decomposition works, while complex ones cannot be so easily unwoven. Thus, the opposite of complicated is simple, while the opposite of complex is independent. (DeRosa, Grisogono, Ryan, & Norman, 2008)

Rhodes and Hastings wrote “the classical definitions of Systems Engineering are fairly similar in nature, with some variation regarding reference to it as a practice, process, method, or approach” citing Chase (1974):

Systems Engineering is the process of selecting and synthesizing the application of the appropriate scientific and technical knowledge to translate system requirements into system design and subsequently to produce the composite of equipment, skills, and techniques that can be effectively employed as a coherent whole to achieve some stated goal or purpose. (Rhodes & Hastings, 2004)

White described conventional methods writing, “these methods generally assume that the solution is primarily a system of hardware and software, that requirements are fully understood from the start, that the organization in charge of the system solely controls its development and configuration, and that the external environment can be represented by interface specifications for machine interactions.” (White B. E., 2010) When initiated at

Bell Labs and later adopted by other organizations attempting to solve problems of unprecedented size and growing complexity, traditional or classical systems engineering was seen as a branch of engineering in the classical sense, that is, as applied only to physical systems, such as a national phone system or spacecraft.


2.3.1 EOS Standards of Practice

EOS standards of practice are documented in a broad array of standards, handbooks, academic literature, and web-resources, focusing on a variety of domains documented in

Table 2-4. Figure 2-5 presents the timeline for the development of EOS standards highlighting the gradual incorporation of SoSM into the TSM baseline set of artifacts.

Table 2-4: EOS Standards of Practice

Source | Topic | TSM | SoSM
(Sage & Rouse, Handbook of Systems Engineering and Management, 2009) | Extensive discussion of all facets of engineering of systems | X | X
(INCOSE SEH, 2015) Systems Engineering Handbook (SEH) | Provides overview on Systems Engineering; Describes generic life cycle states; Detailed presentation of Technical, Technical Management, Agreement and Organizational Project-Enabling Processes | X | X
(ISO/IEC/IEEE 15288:2015(E), 2015) | Defines discipline and practice of SE; Identifies Hierarchy Levels of Systems; Validates Definition of SoS (8) | X | X
(INCOSE SEBoK, 2016) Guide to the SE Body of Knowledge (SEBoK) | Frequently updated SE knowledge; Foundations of SE; SE and Management; Applications of SE; Enabling SE; Related Disciplines; Implementation Examples | X | X
(MITRE, 2014) MITRE Systems Engineering Guide | Guide organization follows the SEH | X | X
(Maier, Architecting Principles for Systems-of-Systems, 1996) | Identification of Five Principal Characteristics; Categories of Managerial Control; and Identification of Architectural Principles for SoS | | X
(Maier, Architecting Principles for Systems-of-Systems, 1998) | Reduction to Two Principle Characteristics of SoS; Identifies types of system misclassification | | X
(Dahmann & Baldwin, 2008) | Identification of Category of Managerial Control for SoS | | X



Figure 2-5: Development Timeline for EOS Standards of Practice

2.3.2 Classical Sciences Assumptions Underpinning EOS

A discussion on the assumptions underlying classical sciences is important in that the theory of systems engineering for ordered systems (EOS) is largely based on the theories of Classical Sciences. Ryan wrote that Descartes “described a scientific method, the adherence to which he hoped could provide privileged access to truth” adding “the second rule of analytic reduction, and the third rule of understanding the simplest objects and phenomena first, provided the view of scientific explanation as decomposing the problem into simple parts to be considered individually, which could then be re- assembled to yield an understanding of the integrated whole.” (Ryan, 2008) Rebovich wrote:

Classical systems engineering is a sequential, iterative development process used to produce systems and sub-systems, many of which are of unprecedented technical complication and sophistication. The INCOSE Systems Engineering process is a widely recognized representation of classical systems engineering.

An implicit assumption of classical systems engineering is that all relevant factors are largely under the control of or can be well understood and accounted for by the engineering organization, the system engineer, or the program manager and this is normally reflected in the classical systems engineering mindset, culture, and processes. (Rebovich Jr., The Evolution of Systems Engineering, 2008)

With adoption and application of theories from classical sciences comes acceptance of assumptions that underpin Classical Sciences which were due in part to the simplifications required to solve complex mathematics by hand. Sheard wrote

“engineers and scientists generally considered nonlinear equations to be intractable” adding “scientists from Galileo to Newton had to assume that nonlinearities … were small enough to be neglected.” (Sheard S. , Complex Adaptive Systems in Systems

Engineering and Management, 2009, p. 1289) Regarding assumptions or conditions that are part of classical sciences, von Bertalanffy wrote:


The first is that interactions between "parts" be non-existent or weak enough to be neglected for certain research purposes. Only under this condition, can the parts be "worked out," actually, logically, and mathematically, and then be "put together."

The second condition is that the relations describing the behavior of parts be linear; only then is the condition of summativity given, i.e., an equation describing the behavior of the total is of the same form as the equations describing the behavior of the parts. (von Bertalanffy, General System Theory, 1972, p. 18)

Table 2-5 contains Sheard’s four key assumptions for ordered/traditional systems:

Table 2-5: Assumptions in Ordered (Traditional) Systems Engineering

Assumption | Implications
Newtonian mechanics | Linear analysis (possibly perturbed); Avoid three-body and higher-order problems; Equilibrium conditions; Thermodynamics applies: no energy transfer, highest entropy; Gaussian distributions (bell curves) predominate
Decomposition | Break down system into subsystems and, further, allocate requirements, build, and integrate; Pay special attention to interfaces; Machine mental models: predict, control, and conform
Hierarchical management | Distribution of complexity among workers, except at the top; Focus on efficiency, predictability, control; Waterfall or waterfall-derived development life cycles; Specification-based acquisition; Separation of management and technical
Stable environment | No long-term changes; Few transients, and those treated as perturbations; Specifications assume known and stable environment

(Sheard S., Complex Adaptive Systems in Systems Engineering and Management, 2009, p. 1287)

Each of the four major assumptions of Classical Systems is described in turn, presenting first the theoretical basis or justification for the simplification or assumption, followed by a discussion of its adoption and potential impacts on EOS.

2.3.2.1 Assumption of Newtonian Mechanics

Bertalanffy wrote “Classic science was concerned with one-way causality or relations between two variables as even the three-body problem permits no closed solution by


analytical methods of classical mechanics.” (von Bertalanffy, The History and Status of

General Systems Theory, 1972) Weinberg wrote:

Newton’s Law of Universal Gravitation states that the force of attraction between two bodies is in no way dependent on the presence of a third body. As it happens, the solar system has one body (the sun) whose mass is much larger than any other masses, larger, in fact, than the mass of all the other bodies together. Because of this dominant mass, the pair equations not involving the sun’s mass yield forces small enough to be ignored, at least considering the accuracy of the data Newton was trying to explain. (Weinberg, 1975, p. 9)

By ignoring the many objects in the cosmos and instead focusing on the very small number of objects close enough to matter, Newton was able to use classical science to predict the motion of nearby planets. Conversely, classical studies of gases, made of many molecules, required the opposite assumption – that the interesting measurements were a few average properties of the molecules, rather than the exact properties of any one molecule, which could not be measured. (Weinberg, 1975, p. 14)

The Law of Large Numbers, which applies to systems that are complex yet sufficiently random in their behavior to be regular enough to study statistically, states that the larger the population, the more likely one is to observe values that are close to the predicted average values. (Weinberg, 1975, p. 14) von Bertalanffy described the use of statistics based on randomness and the Law of Large Numbers as

“unorganized complexity” adding “there loomed the problem of ‘organized complexity’ that is interrelations between many but not infinitely many components.” (von

Bertalanffy, The History and Status of General Systems Theory, 1972)
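A minimal sketch of the Law of Large Numbers at work follows (an illustration with an assumed fair six-sided die, not drawn from Weinberg’s text): the observed average of many independent rolls clusters ever more tightly around the predicted mean of 3.5 as the population grows, which is the kind of “unorganized complexity” classical statistics handles well.

    import random

    random.seed(1)

    def sample_mean(n):
        # Mean of n rolls of a fair six-sided die; the predicted mean is 3.5
        return sum(random.randint(1, 6) for _ in range(n)) / n

    for n in (10, 1_000, 100_000):
        print(n, round(sample_mean(n), 3))   # converges toward 3.5 as n grows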

According to Norman and Kuras, “Traditional systems engineering (TSE) has its foundations in Linear System Theory. Key ideas are proportionality, superpositioning, the existence of invertible functions (i.e., x = f⁻¹(f(x))), and the assumption of repeatability.” They add that, “the practice of TSE is the application of a series of linear


transformations” where “a hallmark of the process is the ability to justify everything built in terms of the original requirements.” (Norman & Kuras, 2006) Sheard wrote that the assumption of Newtonian mechanics means, “we assume that the systems behave repeatably and predictably.” (Sheard S. , Complex Adaptive Systems in Systems

Engineering and Management, 2009, p. 1287)
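The linear-system properties Norman and Kuras list can be checked mechanically, as in the sketch below (the transform f is an arbitrary assumed example, not taken from their paper): proportionality and superposition hold for a linear map, and an inverse recovers the original input, x = f⁻¹(f(x)).

    import math

    def f(x):
        return 3.0 * x        # an assumed linear transform

    def f_inv(y):
        return y / 3.0        # its inverse

    a, b, x1, x2 = 2.0, -5.0, 1.5, 4.0
    print(math.isclose(f(a * x1 + b * x2), a * f(x1) + b * f(x2)))  # superposition
    print(math.isclose(f(a * x1), a * f(x1)))                       # proportionality
    print(math.isclose(f_inv(f(x1)), x1))                           # invertibility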

2.3.2.2 Assumption of Decomposition

Descartes described his philosophy based on four precepts. His second precept, analytic reduction, was to “divide each of the difficulties under examination into as many parts as possible, and as might be necessary for its adequate solution.” (Ryan, 2008)

INCOSE wrote that, “the best way to understand a complicated system is to break it down into parts recursively until the parts are so simple that we understand them” warning that “this approach does not help us to understand a complex system, because the emergent properties that we really care about disappear when we examine the parts in isolation.” (INCOSE SEH, 2015, p. 9) Sheard noted the limitation to this approach stating, “Decomposition necessarily reduces emphasis on the aspects of the system that cannot be broken into small pieces – how the system becomes a whole that is more than the sum of its parts.” (Sheard S. , Complex Adaptive Systems in Systems Engineering and Management, 2009, p. 1288)

DeRosa stated, “TSE is essentially a reductionist process, proceeding logically from requirements to delivery of a system.” (DeRosa, Forward, 2011, p. 2) Sage and Rouse characterized TSE as, “developing processes for systems engineering that allow us to decompose the engineering of a large system into smaller subsystem engineering issues, engineer the subsystems, and then build the complete system as an integrated collection


of these subsystems.” (Sage & Rouse, An Introduction to Systems Engineering and

Systems Management, 2009, p. 11) Sheard described decomposition as a methodology to reduce complexity of systems:

Systems engineering presumes, even dictates, that the complexity of a system solution can be reduced by decomposition. We first investigate the requirements for the system. Then we create a system architecture that will satisfy the requirements of the system: in doing so, we define subsystems and how they interact. Then we allocate the system-level requirements to various subsystems. We are then ready to repeat the process at the subsystem level. This process is called decomposition. (Sheard S. , Complex Adaptive Systems in Systems Engineering and Management, 2009, p. 1288)
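A toy illustration of that recursive loop follows (the element hierarchy and names are assumptions for illustration, not a real architecture): requirements are allocated to subsystems, and the same process is then repeated one level down.

    # Hypothetical product breakdown used only to show the recursion
    ARCHITECTURE = {
        "vehicle": {
            "propulsion": {"engine": {}, "fuel_system": {}},
            "avionics": {"sensors": {}, "flight_computer": {}},
        }
    }

    def allocate(element, children, level=0):
        # Define the element's requirements, then repeat the process for each child
        print("  " * level + element + ": derive and allocate requirements")
        for child, grandchildren in children.items():
            allocate(child, grandchildren, level + 1)

    allocate("vehicle", ARCHITECTURE["vehicle"])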

Sage and Rouse note a limitation to this approach stating that, “Decomposition necessarily reduces emphasis on the aspects of the system that cannot be broken into small pieces – how the system becomes a whole that is more than the sum of its parts.”

(Sage & Rouse, An Introduction to Systems Engineering and Systems Management,

2009) Gilbert and Yearworth wrote:

Systems Engineering development projects often fail to meet delivery expectations in terms of timescales and cost. Project plans, which set cost and deadline expectations, are produced and monitored within a reductionist paradigm, incorporating a deterministic view of cause and effect. This assumes that the cumulative activities and their corresponding durations that comprise the developed solution can be known in advance, and that monitoring and management intervention can ensure satisfactory delivery of an adequate solution, through implementation of this plan. (Gilbert & Yearworth, 2016)

Senge wrote that, “from a very early age, we are taught to break apart problems, to fragment the world. This apparently makes complex tasks and subjects more manageable” but also warned that, “we pay a hidden, enormous price.” (Senge, 2000)

Senge describes limits of decomposition:

We can no longer see the consequences of our actions; we lose our intrinsic sense of connection to the larger whole. When we then try to “see the big picture,” we try to reassemble the fragments in our minds, to list and organize all the pieces. But, as physicist David Bohm says, the task is futile – similar to trying to reassemble the


fragments of a broken mirror to see a true reflection. Thus, after a while we give up trying to see the whole together. (Senge, 2000)

2.3.2.3 Assumption of Hierarchical Management

Descartes described his philosophy based on four precepts. His third precept, understanding the simplest objects and phenomena first, was to “conduct my thoughts in such order that, by commencing with objects the simplest and easiest to know, I might ascend by little and little, and, as it were, step by step, to the knowledge of the more complex; assigning in thought a certain order even to those objects which in their own nature do not stand in a relation of antecedence and sequence.” (Ryan, 2008)

In the Handbook of Systems Engineering and Management, Shenhar and Sauser wrote, “a simple way to define various levels of complexity is to use a hierarchical framework of systems and subsystems.” (Shenhar & Sauser, 2009, p. 126) Simon wrote:

By a hierarchic system, or hierarchy, I mean a system that is composed of interrelated subsystems, each of the latter being, in turn, hierarchic in structure until we reach some lowest level of elementary subsystem. (Simon, 1962)

2.3.2.4 Assumption of Stable Environment

Karl Deutsch described the assumptions of classical sciences and mechanism writing:

The whole was completely equal to the sum of its parts; which could be run in reverse; which would behave in exactly identical fashion no matter how often these parts were disassembled and put together again;

Irrespective of the sequence in which the disassembling or reassembling would take place, the parts were never significantly modified by each other, nor by their own past; and, each part once placed in its appropriate position with its appropriate momentum, would stay exactly there and continue to fulfil its completely and uniquely determined function. (Weinberg, 1975, p. 4)

Hybertson and Sheard wrote, “the traditional view of system characteristics is that they are relatively stable and predictable.” (Hybertson & Sheard, 2008) According to

Norman and Kuras, TSE begins with the specification of requirements and continues with


allocating desired, known functionality among specific elements of a design; all known a priori and stable over time subject to the ability to justify everything built in terms of the original requirements. (Norman & Kuras, 2006) They conclude that, “the practice of TSE seeks to understand the place of an element within the environment, isolate the element under study from the environment, and then treat the environment as a constant.”

(Norman & Kuras, 2006)

2.3.3 Codifying TSM

Norman and Kuras wrote, “traditional system engineering relies on the making of and the fulfilling of predictions” adding that the characteristics a system must meet to be considered a traditional system and thus a candidate for the application of traditional system engineering (TSE) include:

The specific desired outcome must be known a priori, and it must be clear and unambiguous (implied in this is that the edges of the system, and thus responsibility, are clear and known);

There must be a single, common manager who is able to make decisions about allocating available resources to ensure completion;

Change is introduced and managed centrally; and,

There must be “fungible” resources (that is money, people, time, etc.) which can be applied and reallocated as needed. (Norman & Kuras, 2006)

Thus, in its most basic form, the definition of a traditional system and its complexity is related to the recursive decomposition of a larger entity into smaller, more manageable entities where the system engineer can predict the behavior of the system a priori.

DeRosa states that, “TSE is essentially a reductionist process, proceeding logically from requirements to delivery of a system.” (DeRosa, Forward, 2011, p. 2) Sage and

Rouse characterized TSE as, “developing processes for systems engineering that allow us to decompose the engineering of a large system into smaller subsystem engineering


issues, engineer the subsystems, and then build the complete system as an integrated collection of these subsystems.” (Sage & Rouse, An Introduction to Systems Engineering and Systems Management, 2009, p. 11) INCOSE defined a system as:

A system is a construct or collection of different elements that together produce results not obtainable by the elements alone. The elements, or parts, can include people, hardware, software, facilities, policies, and documents; that is, all things required to produce systems-level results. The results include system level qualities, properties, characteristics, functions, behavior, and performance. The value added by the system as a whole, beyond that contributed independently by the parts, is primarily created by the relationship among the parts; that is, how they are interconnected (INCOSE, 2016)

INCOSE wrote, “While the concepts of a system can generally be traced back to early

Western philosophy and later to science, the concept most familiar to systems engineering is often traced to Ludwig von Bertalanffy in which a system is regarded as a

“whole” consisting of interacting “parts,” adding that the systems “are man-made, created and utilized to provide products or services in defined environment for the benefit of users and other stakeholders.” (INCOSE SEH, 2015, p. 5) In ISO/IEC/IEEE 15288, Systems and software engineering – system life cycle processes, IEEE defined a system as:

The systems considered in this International Standard are man-made, created and utilized to provide products or services in defined environments for the benefit of users and other stakeholders. These systems may be configured with one or more of the following system elements: hardware, software, data, humans, processes (e.g., processes for providing service to users), procedures (e.g., operator instructions), facilities, materials, and naturally occurring entities. As viewed by the user, they are thought of as products or services. (ISO/IEC/IEEE 15288:2015(E), 2015, p. 11)

The INCOSE SEH augments the IEEE definition by introducing a System of Interest

(SOI) construct which was recursively defined as a set of elements that are not systems, a set of elements that can be systems, a System of Systems (SoS), or an enterprise as follows:

• an SOI is a system that is “man-made, created, and utilized to provide products or services in defined environment for the benefit of users and other stakeholders” comprised of “an integrated set of elements, subsystems, or assemblies that accomplish a defined objective”;

• an SOI is a system comprised of system elements that “can be systems on their own merit”;

• an SOI is a system of systems (SoS) “whose elements are managerially and/or operationally independent systems”; or,

• an SOI is an enterprise, which is a “purposeful combination (e.g., a network) of interdependent resources (e.g., people, processes, organization, supporting technologies, and funding) that interact with each other to coordinate functions, share information, allocate funding, create workflows, and make decisions and their environment(s) to achieve business and operational goals through a complex web of interactions distributed across geography and time.” (INCOSE SEH, 2015, pp. 5, 7, 8, 176)

2.3.4 Codifying SoSM

Sage and Cuppan challenged TSE in 2001 to, “reconsider our canonical approach to engineering and management of SoS” (Sage & Cuppan, 2001) which evolved into

Systems of systems engineering (SoSE) with the objective to address “shortcoming in the ability to deal with difficulties generated by increasingly complex and interrelated systems of systems.” (Gorod, Sauser, & Broadman, System-of-Systems Engineering

Management: A Review of Modern History and a Path Forward, 2008)

Maier wrote, “While the phrase “system-of-systems” is commonly seen, there is less agreement on what they are, how they may be distinguished from ‘conventional’ systems, or how their development differs from other systems.” (Maier, Architecting Principles for

Systems-of-Systems, 1998)

Maier presented the concept that a system of systems (SoS) exhibits five characteristics: operational independence, managerial independence, geographic distribution, emergent behavior, and evolutionary development. (Maier, Architecting Principles for

Systems-of-Systems, 1996; Maier, Architecting Principles for Systems-of-Systems, 1998)


Maier wrote that a system that does not meet the operational and managerial independence of its components is not to be considered a system-of-systems,

“regardless of the complexity or geographic distribution of its components.” (Maier,

Architecting Principles for Systems-of-Systems, 1998)

Sage and Biemer defined an SoS as, “a large-scale, complex system, involving a combination of technologies, humans, and organizations, and consisting of components which are systems themselves, achieving a unique end-state by providing synergistic capability from its component systems, and exhibiting a majority of the following characteristics: operational and managerial independence, geographic distribution, emergent behavior, evolutionary development, self-organization, and adaptation.” (Sage

& Biemer, 2007)

INCOSE wrote that a SoS is a system, “whose elements are managerially and/or operationally independent systems” and a, “SoS usually exhibits complex behaviors” that may be attributed to the existence of the five characteristics adding that for complex systems, “interactions between the parts exhibit self-organization, where local interactions give rise to novel, nonlocal, emergent patterns.” (INCOSE SEH, 2015, p. 9)

Recognizing that the body of SE knowledge continues to evolve, Version 4 of the

INCOSE SE Handbook introduces system of systems using Maier’s five characteristics of operational independence, managerial independence, geographic distribution, emergent behavior, and evolutionary development. (INCOSE SEH, 2015, p. 8)

In describing the difference between TSE and SoSE, Keating, Padilla, and Adams argue that it is a mistake to simply assume that SoSE is an extrapolation of TSE or that


SoSE is so separate and distinct from TSE that once SoSE applies there is no value in

TSE. (Keating, Padilla, & Adams, 2008)

INCOSE states that, “the independence of constituent systems in an SoS is the source of a number of technical issues facing SE of SoS” adding, “the fact that a constituent system may continue to change independently of the SoS, along with the interdependencies between that constituent system and other constituent systems, adds to the complexity of the SoS.” (INCOSE SEH, 2015, p. 10)

Keating et al., state that, “Engineers of future complex systems face an emerging challenge of how to address problems associated with integration of multiple complex systems” with the realization that "Complex systems that have been conceived, developed, and deployed as stand-alone systems to address a singular problem can no longer be viewed as operating in isolation.” (Keating, et al., 2003) Keating et al., continued “SoSE represents an evolution of TSE, not a radical departure” adding “TSE approaches should be incorporated in SoSE efforts as appropriate”; however, “SoSE must be developed to address the shortcoming of TSE in addressing increasingly complex systems problems.” (Keating, et al., 2003)

INCOSE states that, “one of the objectives of the SE process is to minimize undesirable consequences” which “can be accomplished through the inclusion of and contributions from experts across relevant disciplines.” (INCOSE SEH, 2015, p. 12) This

‘call for experts’ aligns with the Cynefin definition of the ‘Complicated’ region where cause and effect requires special investigation or inclusion of expert knowledge. In systems where the relationship between cause and effect requires analysis, special study or the application of expert knowledge, SoSE is the preferred SE domain.


According to Keating et al., limitations of TSE include the inability to address high levels of ambiguity and uncertainty, the practice of placing system context in the background, and attempting to deliver complete solutions when system demands may be better met by partial system solutions deployed in an iterative fashion. (Keating, et al., 2003) NASA’s SE handbook describes the relationship between complexity and requirements, “As system complexity grows, the potential for conflicts between requirements increases.” (NASA, 2009, p. 246)

Keating et al., described the limitations with TSE:

• Traditional systems engineering has not been developed to address high levels of ambiguity and uncertainty in complex systems problems;
• Although traditional systems engineering does not completely ignore contextual influences on system problem formulation, analysis, and resolution, it places context in the background – context, the circumstances and conditions within which a complex systems problem is embedded, can constrain and overshadow technical analysis in determining system solution success; and,
• Demand for deploying complex systems that may offer incomplete solutions. This is contrary to traditional thinking rooted in a linear pattern of concept. (Keating, et al., 2003)

This research now branches from EOS, which is grounded in core classical-science precepts, to Engineering Un-Ordered Systems (EUOS), which is broadly based on theories from the systems sciences and complexity theory.

2.4 Engineering Un-Ordered Systems (EUOS)

In the Handbook of Systems Engineering, Sheard wrote that systems may be grouped into two primary categories: an ordered/traditional system or a complex/non-traditional system. (Sheard S., Complex Adaptive Systems in Systems Engineering and Management, 2009, p. 1284) The distinction of ‘un-ordered’ versus ‘ordered’ systems was derived from Kurtz and Snowden, who wrote:

To avoid much repetition of the longer terms “directed order” and “emergent order,” we call emergent order “un-order.” Un-order is not the lack of order, but a different kind of order, one not often considered but just as legitimate in its own way. Here we deliberately use the prefix “un-” not in its standard sense as “opposite of” but in the less common sense of conveying a paradox, connoting two things that are different but in another sense the same. Thus, by our use of the term “un-order,” we challenge the assumption that any order not directed or designed is invalid or unimportant.

In the ordered domain we focus on efficiency because the nature of systems is such that they are amenable to reductionist approaches to problem solving; the whole is the sum of the parts, and we achieve optimization of the system by optimization of the parts. In the domain of un-order {discussed next} the whole is never the sum of the parts; every intervention is also a diagnostic, and every diagnostic an intervention; any act changes the nature of the system. As a result, we have to allow a degree of sub-optimal behavior of each of the components if the whole is to be optimized. (Kurtz & Snowden, 2003)

The definition of “system” and its subsequent use in this research derive from the writings of Ludwig von Bertalanffy – specifically his description of the issues associated with defining and describing a “system” and the need for a systems approach:

Compared to the analytical procedure of classical science with resolution into component elements and one-way or linear causality as basic category, the investigation of organized wholes of many variables requires new categories of interaction, transaction, organization, teleology, etc., with many problems arising for epistemology, mathematical models, and techniques. (von Bertalanffy, The History and Status of General Systems Theory, 1972)

White wrote “complex systems engineering deals with challenging system environments where SE’s simplifying assumptions do not hold.” (White B. E., 2010)

Complexity is not the absence of order – rather it is a different form of order: un-order, or emergent order. While ordered systems are designed and order is constructed top-down, un-ordered systems are characterized by un-planned order emerging from agents and sub-systems up to the system as a whole. (Dahlberg, 2015)

Review of the literature identified exemplars where a perceived need for new or enhanced (i.e., non-traditional) SE methodology was established by comparison or contrast to classical, traditional systems engineering methods (TSM), generally documenting an assumption, limit, or constraint that prohibited TSM application in that specific example.

Norman and Kuras, describing a specific MITRE program, provide this example:

Using the current instantiation of the Air and Space Operations Center (AOC2), and the desired evolution of it, the AOC2 is shown to be best thought of as a complex system. Complex Systems are alive and constantly changing. They respond and interact with their environments – each causing impact on (and inspiring change in) the other. We make the case that a traditional systems engineering (TSE) approach does not scale to the AOC2; consequently, we don’t believe TSE scales to the “enterprise.” (Norman & Kuras, 2006, p. 1)

2.4.1 EUOS Standards of Practice

Table 2-6: EUOS Standards of Practice

Source | Topic | TSM | SoSM | ESM | CSM
(Oliver, Kelliher, & Keegan, Jr., 1997) Engineering Complex Systems | Describes combining text descriptions and modeling to analyze and describe large or small complex systems | | | | X
(Bar-yam, 2003) Dynamics of Complex Systems | Mathematical treatment of structure, dynamics, evolution, development and quantitative complexity that apply to all complex systems | | | | X
(DeRosa, Grisogono, Ryan, & Norman, 2008) A Research Agenda for the Engineering of Complex Systems | Enterprise systems; classes of problems for which complex systems are required | | | X | X
(Sheard & Mostashari, Principles of Complex Systems for Systems Engineering, 2009) | Systems of systems compared to complex systems, with examples | | X | | X
(Sage & Rouse, Handbook of Systems Engineering and Management, 2009) | Extensive discussion of all facets of engineering of systems | X | X | X | X
(Rebovich Jr & White, 2011) Enterprise Systems Engineering | Extensive discussion of all facets of engineering enterprise systems; presentation of case studies | | | X |
(Gorod, White, Ireland, Gandhi, & Sauser, Case Studies in System of Systems, Enterprise Systems, and Complex Systems Engineering, 2015) | Extensive history of SoSM, ESM, and CSM; presentation of case studies | | X | X | X
(Gilbert & Yearworth, 2016) Complexity in a Systems Engineering Organization: An Empirical Case Study | How does complexity in the organization affect the ability of systems engineers to meet delivery expectations in terms of cost and time? | | | | X

2.4.2 Codifying ESM

According to Sitton and Reich, “although there are technical papers describing such complex adaptive systems as well as some early papers contributing to the theory of systems engineering of enterprises, there is no generally accepted theory or set of best practices on this topic.” (Sitton & Reich, 2015) Rebovich and White provide the most comprehensive treatment of engineering the enterprise in “Enterprise Systems Engineering: Advances in the Theory and Practice.” (Rebovich Jr & White, 2011)

INCOSE also introduces enterprise systems engineering (ESE) in Para 8.5, writing that “Enterprise SE is the application of SE principles, concepts, and methods to the planning, design, improvement, and operations of an enterprise” and adding “enterprise SE is an emerging discipline that focuses on frameworks, tools, and problem-solving approaches for dealing with the inherent complexities of the enterprise.” (INCOSE SEH, 2015, p. 176)

DeRosa described the key elements of ESE as development through adaptation, strategic technical planning, enterprise governance, and ESE processes. (DeRosa, Introduction, 2011, p. 8) While a progeny of both TSE and SoSE, ESE is applicable when there are concurrent, substantial technical and mission changes.

DeRosa wrote “Recognizing the alignment between information-intensive networks in man-made enterprise and complex ecosystems in the natural world, the theoretical basis of ESE is a combination of complex systems where networks were generally open systems with porous boundaries and dynamic and unpredictable natural systems.” (DeRosa, Forward, 2011, p. 10) DeRosa, Rebovich, and Swarz wrote “Ackoff has characterized an enterprise as a “purposeful system” composed of agents who choose both their goals and the means for accomplishing those goals” adding “ESE must account for the concerns, interests, and objectives of these agents.” (DeRosa, Rebovich Jr., & Swarz, An Enterprise System Engineering Model, 2006) Kurtz and Snowden identified three important contextual characteristics that make it difficult to simulate human activity using agent-based computer models:

Humans are not limited to one identity – In a human complex system, an agent is anything that has identity, and we constantly flex our identities both individually and collectively. Individually we can be a parent, sibling, spouse, or child and will behave differently depending on the context. Accordingly, it is not always possible to know which unit of analysis we are working with.

Humans are not limited to acting in accordance with predetermined rules – We are able to impose structure on our interactions (or disrupt it) as a result of collective agreement or individual acts of free will. We are capable of shifting a system from complexity to order and maintaining it there in such a way that it becomes predictable. As a result, questions of intentionality play a large role in human patterns of complexity. It is difficult to simulate true free will and complex intentionality within a rule-based simulation.

Humans are not limited to acting on local patterns – People have a high capacity for awareness of large-scale patterns because of their ability to communicate abstract concepts through language, and more recently, because of the social and technological infrastructure that enables them to respond immediately to events half a world away. This means that to simulate human interaction, all scales of awareness must be considered simultaneously rather than choosing one circle of influence for each agent. (Kurtz & Snowden, 2003)

Elgass et al., wrote that “enterprises are not generally created from a rigorously planned framework to immediately meet a newly identified need; rather, they evolve.” (Elgass, et al., 2011) In contrast to TSE or SoSE, ESE is “exploratory and experimental rather than preplanned and execution-oriented.” (DeRosa, Introduction, 2011, p. 8)

MITRE wrote “When a system is bounded with relatively static, well-understood requirements, the classical methods of systems engineering are applicable and powerful” adding “At the other end of the spectrum, when systems are networked and each is individually reacting to technology and mission changes, the environment for any given system becomes essentially unpredictable.” (MITRE, 2014, p. 37) Rebovich wrote that, “when networked systems are individually adapting to both technology and mission changes, then the environment for any given system or individual becomes essentially unpredictable” adding “the combination of large-scale interdependencies and unpredictability creates an environment that is fundamentally different from that of the system or system of systems (SoS).” (Rebovich Jr., Systems Thinking for the Enterprise, 2011, p. 33) The distinction is that SoSE focuses on technology changes while ESE incorporates environments with rapid mission changes as well.

Rebovich concluded that “the combination of large-scale interdependencies and unpredictability creates an environment that is fundamentally different from that of the system or system of systems (SoS)” adding “As a result, systems engineering success expands to encompass not only success of an individual system or SoS, but also the network of constantly changing systems.” (Rebovich Jr., Systems Thinking for the Enterprise, 2011, p. 33) Sitton and Reich wrote that the “enterprise operational requirements definition process is much more complex because it usually involves understanding the needs and operational tasks of various users that operate in different domains, uses different terms and expressions and specialize in different worlds of content.” (Sitton & Reich, 2015)

2.4.3 Codifying CSM

Sheard and Mostashari defined complex systems as “systems that do not have a centralizing authority and are not designed from a known specification, but instead involve disparate stakeholders creating systems that are functional for other purposes and are only brought together in the complex system because the individual “agents” of the system see such cooperation as being beneficial for them.” (Sheard & Mostashari, Principles of Complex Systems for Systems Engineering, 2009) Sheard and Mostashari presented the following system characteristics for assessing whether a particular system is complex:

• Autonomous interacting parts (agents)
  o Fuzzy boundaries
• Self-organization
  o Energy, in and out
• Display emergent macro-level behaviour
  o Nonlinearity
  o Non-hierarchy and central authority
  o Various scales
• Adapt to surrounding (environment)
  o Become more complex with time; increasingly specialized
• Elements change in response to pressures from neighboring elements (Sheard & Mostashari, Principles of Complex Systems for Systems Engineering, 2009)

Holland wrote “A complex adaptive system has no single governing equation, or rule, that controls the system” adding “Complex adaptive systems also exhibit an aggregate behavior that is not simply derived from the actions of the parts.” (Holland, 1992) In this research, CAS is associated with EUOS and may include ESM, CSM, or both.

2.5 Cynefin sense-making Framework

This section describes sense-making, provides insight into the development of the Cynefin framework, and describes the domains of complexity contained within the framework. Van Beurden, Kia, Zask, Dietrich and Rose wrote “the Cynefin Framework, especially when used as a sense-making tool, can help practitioners understand the complexity of issues, identify appropriate strategies, and avoid the pitfalls of applying reductionist approaches to complex situations.” (Van Beurden, Kia, Zask, Dietrich, & Rose, 2011)

2.5.1 Introduction to Sense-Making

Kurtz and Snowden wrote that “humans use patterns to order the world and make sense of things in complex situations” adding that “patterns are something we actively, not passively create.” (Kurtz & Snowden, 2003) Checkland wrote “given the complexity of any situation in human affairs, there will be a huge number of human activity system models which could be built” adding “the first choice to be made is of which ones are likely to be most relevant.” (Checkland, 2000) While sense-making has been described as “placing stimuli into some kind of framework,” generally referring to a “frame of reference,” Louis proposed that sense-making be viewed as a “thinking process that uses retrospective accounts to explain surprises.” (Louis, 1980)

2.5.2 History of the Cynefin Framework

Two years before the Cynefin framework was introduced to business leaders in the Harvard Business Review, Snowden published an article titled “Multi-ontology sense making: a new simplicity in decision making,” in which he described a “landscape of management” intended to bring “new simplicity into acts of decision making and intervention design in organizations.” (Snowden D. J., Multi-ontology sense making: a new simplicity in decision making, 2005) Snowden described multi-ontology sense making by contrasting the nature of systems (ontology) with the nature of the way we know things (epistemology), as shown in Figure 2-6.

Figure 2-6: Snowden's Landscape of Management Provides Insight into his Initial Research Intertwining Ontology and Epistemology (Snowden D. J., Multi-ontology sense making: a new simplicity in decision making, 2005)

Research into what became the Cynefin framework continued in the areas of knowledge management, cultural change, and community dynamics by Kurtz and Snowden, who conducted a program of disruptive action research using methods of narrative and complexity theory to address critical business issues. They “began by questioning the basic assumptions that pervade the practice and to a lesser degree the theory of decision making and policy formulation in organizations” including assumptions of order, rational choice, and intentional capability to research what happens in decision theory when those assumptions are relaxed. (Kurtz & Snowden, 2003)

Kurtz and Snowden incorporated complexity science into their research of management science focusing on the concepts of ‘emergent order’ and ‘awareness of emergent order’. They wrote that “a considerable amount of research and some early practice is taking place using complex system principles, mainly using computing power to simulate natural phenomena through agent-based models” adding “we believe that such tools are valuable in certain contexts, but are of more limited applicability when it comes to managing people and knowledge.” (Kurtz & Snowden, 2003)

Figure 2-7 shows the 2003 version of the Cynefin framework with the fifth domain, disorder, included in the center but not labeled. Nilsen wrote “a framework usually denotes a structure, overview, outline, system or plan consisting of various descriptive categories, e.g. concepts, constructs or variables, and the relations between them that are presumed to account for a phenomenon” adding “frameworks do not provide explanations; they only describe empirical phenomena by fitting them in a set of categories.” (Nilsen, 2015) Kurtz and Snowden wrote that potential users of the framework should “consider Cynefin a sense-making framework, which means that its value is not so much in logical arguments or empirical verifications as in its effect on the sense-making and decision-making capabilities of those who use it.” (Kurtz & Snowden, 2003)

Figure 2-7: 2003 Version of the Cynefin Framework by Kurtz and Snowden (Kurtz & Snowden, 2003)

Leveraging the research of Kurtz and Snowden applying sense-making to change and organizational learning, Snowden and Boone applied a complexity science perspective to assist business leaders in determining the “prevailing operative context so they can make appropriate choices” where “each domain requires different actions.” (Snowden & Boone, A Leader's Framework for Decision Making, 2007) Snowden wrote about developing “a whole new concept of research in services, modelled on medical science, in which concepts and practice co-evolved.” (Snowden D., 2011) The 2007 version of the Cynefin framework describes a typology of operational environments, called Class of System Problem (COSP) in this research, of increasing complexity, structured to help leaders determine the prevailing operative context from the wide variety of situations in which they must lead and make decisions. Shown in Figure 2-8, the 2007 version of the Cynefin framework helps leaders and decision makers sense which context they are in, enabling better decisions and helping them “avoid the problems that arise when their preferred management style causes them to make mistakes.” (Snowden & Boone, A Leader's Framework for Decision Making, 2007) Snowden and Boone wrote:

The {Cynefin} framework sorts the issues facing leaders into five contexts defined by the nature of the relationship between cause and effect. Four of these – simple, complicated, complex, and chaotic – require leaders to diagnose situations and to act in contextually appropriate ways. The fifth – disorder – applies when it is unclear which of the other four contexts is predominant. Using the Cynefin framework can help executives sense which context they are in so that they can not only make better decisions but also avoid the problems that arise when their preferred management style causes them to make mistakes. (Snowden & Boone, A Leader's Framework for Decision Making, 2007)

Snowden and Boone described the Cynefin framework as a “leader’s framework for decision making,” specifically one that would “allow executives to see things from new viewpoints, assimilate complex concepts, and address real-world problems and opportunities,” in a 2007 article published in the Harvard Business Review. (Snowden & Boone, A Leader's Framework for Decision Making, 2007) The Snowden and Boone version of the framework, shown in Figure 2-8, contains the five complexity regions and reinstates the order/un-order ontology first introduced in Snowden’s landscape of management. (see Figure 2-6, above)

Figure 2-8: 2007 Version of the Cynefin Framework by Snowden and Boone (Snowden & Boone, A Leader's Framework for Decision Making, 2007)

Close inspection of Figure 2-7 (the 2003 version of the Cynefin framework) and Figure 2-8 (the 2007 version of the Cynefin framework) shows that the nomenclature of the complexity domains was modified between the 2003 and 2007 versions.

Kurtz and Snowden wrote that the framework originated in knowledge management as a means of distinguishing between formal and informal communities interacting with both structured processes and uncertain conditions. (Kurtz & Snowden, 2003) In naming the framework Cynefin, Kurtz and Snowden wrote:

The name Cynefin is a Welsh word whose literal translation into English as habitat or place fails to do it justice. It is more properly understood as the place of our multiple affiliations, the sense that we all, individually and collectively, have many roots, cultural, religious, geographical, tribal, and so forth. We can never be fully aware of the nature of those affiliations, but they profoundly influence what we are. The name seeks to remind us that all human interactions are strongly influenced and frequently determined by the patterns of our multiple experiences, both through the direct influence of personal experience and through collective experience expressed as stories. (Kurtz & Snowden, 2003)

2.5.3 Cynefin Complexity Domains

Kurtz and Snowden described the Cynefin framework as a “phenomenological framework, meaning that what we care most about is how people perceive and make sense of situations in order to make decisions” adding “perception and sense-making are fundamentally different in order versus un-order.” (Kurtz & Snowden, 2003)

Kurtz and Snowden wrote “the central domain of disorder”, described in Section 2.5.3.5 below, “is critical to understanding conflict among decision makers looking at the same situation from different points of view.” (Kurtz & Snowden, 2003) Regarding the significant distinction between order and un-order they wrote:

Un-order is not the lack of order, but a different kind of order, one not often considered but just as legitimate in its own way. Here we deliberately use the prefix “un-” not in its standard sense as “opposite of” but in the less common sense of conveying a paradox, connoting two things that are different but in another sense the same. Thus, by our use of the term “un-order,” we challenge the assumption that any order not directed or designed is invalid or unimportant. (Kurtz & Snowden, 2003)

Shown in Figure 2-9, Kurtz and Snowden wrote that in addition to the central domain of disorder, “the framework actually has two larger domains, each with two smaller domains inside.” (Kurtz & Snowden, 2003) This research asserts that within the larger Ordered domain, shown on the right half of Figure 2-9, the four foundational assumptions underpinning systems engineering – Newtonian mechanics, analysis by decomposition, hierarchical management, and stable environment – apply, thus allowing cause and effect to be related or determined.

Figure 2-9: Domains of Un-Ordered, Ordered and Disorder

Shown in Figure 2-10, Kurtz and Snowden continued “in the right-side domain of order, the most important boundary for sense-making is that between what we can use immediately (what is known) and what we need to spend time and energy finding out about (what is knowable).” (Kurtz & Snowden, 2003) Brougham wrote “in an ordered system … behavior is highly predictable and the causality is either obvious from experience or can be determined with the right expertise.” (Brougham, 2015)

Figure 2-10: Ordered domains include the Simple domain and the Complicated domain

Shown in Figure 2-11, Kurtz and Snowden wrote “in the left-side domain of un-order, distinctions of knowability are less important than distinctions of interaction; that is, distinctions between what we can pattern (what is complex) and what we need to stabilize in order for patterns to emerge (what is chaotic).” (Kurtz & Snowden, 2003)

Figure 2-11: Un-Ordered domains include the Complex domain and the Chaotic domain

In summary, Kurtz and Snowden wrote “the Cynefin framework is based on three ontological states (order, complexity, and chaos) and a variety of epistemological options in all three of those states” adding “interweaving of ontology and epistemology appears to be an essential aspect of human sense-making in practice.” (Kurtz & Snowden, 2003)

Each of the five domains is described below.

2.5.3.1 Simple Domain – Ordered & Known

Snowden and Boone wrote that this domain is characterized by “stability and clear cause-and-effect relationships that are easily discernible by everyone.” Originally calling this “the domain of best practice,” they wrote that “the right answer is self-evident and undisputed” and “decisions are unquestioned because all parties share an understanding.” They wrote that this context “assumes an ordered universe, where cause-and-effect relationships are perceptible, and right answers can be determined based on the facts.” (Snowden & Boone, A Leader's Framework for Decision Making, 2007) Kurtz and Snowden wrote that “cause and effect relationships are generally linear, empirical in nature, and not open to dispute” adding “repeatability allows for predictive models to be created, and the objectivity is such that any reasonable person would accept the constraints of best practice.” (Kurtz & Snowden, 2003) Gardner wrote “this is the domain of established fact, we believe we fully comprehend the relationship between pieces of information in this space and are confident we can make predictions based on our understanding.” (Gardner, 2013) Sheard wrote that known situations “have repeatable and predictable relationships between cause and effect” and “lend themselves to imposition of best practices and standard operating procedures.” (Sheard S., Complex Adaptive Systems in Systems Engineering and Management, 2009, p. 1314)

2.5.3.2 Complicated Domain – Ordered & Knowable

Snowden and Boone characterized this as “the domain of experts” adding “though there is a clear relationship between cause and effect, not everyone can see it.” They wrote that leadership in this context “calls for investigating several options,” often requiring “expert diagnosis” from “a team of experts,” where there is more than one right answer. (Snowden & Boone, A Leader's Framework for Decision Making, 2007) Kurtz and Snowden wrote that in the absence of time or money to fully understand an environment, this domain requires decision makers to rely on expert opinion. (Kurtz & Snowden, 2003) Like the simple context, the complicated context “assumes an ordered universe, where cause-and-effect relationships are perceptible, and right answers can be determined based on the facts.” (Snowden & Boone, A Leader's Framework for Decision Making, 2007) Gardner wrote “in this space we see the formation of formal collaboration between multi disciplines teams consisting of subject matter experts.” (Gardner, 2013)

Validation of the assertion shown in Figure 2-10 was provided by several sources. Sheard wrote that knowable situations “experience cause and effect that are separated over time and space, and thus should be analysed, usually by reductionist methods.” (Sheard S., Complex Adaptive Systems in Systems Engineering and Management, 2009, p. 1314) Van Beurden et al., wrote “in this domain, structured techniques based on reductionist science (e.g. longitudinal studies), are used to produce evidence.” (Van Beurden, Kia, Zask, Dietrich, & Rose, 2011) Kurtz and Snowden wrote that in the knowable domain “we focus on efficiency because the nature of systems is such that they are amenable to reductionist approaches to problem solving; the whole is the sum of the parts, and we achieve optimization of the system by optimization of the parts.” (Kurtz & Snowden, 2003)

2.5.3.3 Complex Domain – Un-Ordered & Knowable

Comparing complex systems to complicated systems, White et al., wrote:

Complex is more than complicated, a notion that is on the lowest rung of a discrete or even continuous scale of increasing complexity. Many people including engineers and systems engineers use complex and complicated interchangeably, or worse, use complex when they mean only complicated. Complex refers to a range of difficulty that is more, and often much more, than merely complicated. (White, Gandhi, Gorod, Ireland, & Sauser, 2013)

Kurtz and Snowden wrote that “humans use patterns to order the world and make sense of things in complex situations.” (Kurtz & Snowden, 2003) Gardner wrote that the complex domain “is the arena of ambiguity” describing the need for “looking for patterns and/or insights.” (Gardner, 2013) Van Beurden et al., wrote that in the un-ordered domain “there are cause/effect relationships but their non-linear nature and the multiplicity of agents defy conventional analysis” adding “unpredictable patterns emerge from the mix to be understood only in retrospect.” (Van Beurden, Kia, Zask, Dietrich, & Rose, 2011) Snowden and Boone amplified the limits of observation, writing “in this domain we can understand why things happen only in retrospect.” (Snowden & Boone, A Leader's Framework for Decision Making, 2007) Kurtz and Snowden wrote:

This is the domain of complexity theory, which studies how patterns emerge through the interaction of many agents. There are cause and effect relationships between the agents, but both the number of agents and the number of relationships defy categorization or analytic techniques. Emergent patterns can be perceived but not predicted; we call this phenomenon retrospective coherence. (Kurtz & Snowden, 2003)

There is a risk of assuming that a pattern, once recognized, may be used for prediction of future outcomes. Van Beurden et al., warn:

Attempts to turn emergent patterns into policy or procedure by top-down ‘installation’ that disregards their context will inevitably be confronted by new emergent patterns, each of which will also be understood only on reflection. So even expert opinion, based on historically stable patterns of meaning, will not sufficiently prepare us to recognize and act on new unexpected patterns. (Van Beurden, Kia, Zask, Dietrich, & Rose, 2011)

2.5.3.4 Chaotic Domain – Un-Ordered & Unknowable

Kurtz and Snowden wrote that in the simple, complicated, and complex domains “there are visible relationships between cause and effect” adding that “in the chaotic domain there are no such perceivable relations and the system is turbulent” warning “we do not have the response time to investigate change.” (Kurtz & Snowden, 2003) Van Beurden et al., wrote that the chaotic domain is “the turbulent, un-ordered” and “has no visible cause/effect relationships.” (Van Beurden, Kia, Zask, Dietrich, & Rose, 2011) Gardner wrote that the chaotic domain is a “turbulent/disorganized space where all information is equal,” adding that each piece of information “is a fragment with no relationship to any other fragment.” (Gardner, 2013)

Snowden and Boone wrote “in a chaotic context, searching for right answers would be pointless: the relationships between cause and effect are impossible to determine because they shift constantly and no manageable patterns exist – only turbulence.” (Snowden & Boone, A Leader's Framework for Decision Making, 2007)

2.5.3.5 Disorder Domain – Unknowing & Unconcerned

A synonym for unknowing is ignorance while a synonym for unconcerned is apathy.

Tumlinson provides a humorous example of two primary motivations for projects to be in the disorder domain:

Two kids are sitting in a high school auditorium, listening to the principal give the welcoming speech for the year. The principal says, “The two greatest dangers that students face are ignorance and apathy.”

One of the students turns to his friend and asks, “Dude, what's ignorance and apathy?”

The other student, bored and restless and wanting the speech to end, says, "I don't know and I don't care." (Tumlinson, 2017)

Snowden and Boone wrote that “the very nature of the fifth context – disorder – makes it particularly difficult to recognize when one is in it.” (Snowden & Boone, A Leader's Framework for Decision Making, 2007) Van Beurden et al., describe the disorder domain:

Here, we are undecided about which of the four other domains our situation represents, often because we are not conscious of alternatives. We may have a personalized, ‘one-size-fits-all’, default approach to management, decision-making, and group function that reflects our comfort zone rather than any rational choice. (Van Beurden, Kia, Zask, Dietrich, & Rose, 2011)

2.5.4 Cynefin Summary

Dahlberg wrote “The Cynefin Framework developed by David Snowden offers a useful approach to sense-making by dividing systems and processes into three distinct ontologies: (1) Order, (2) un-order, and (3) chaos” adding “Order and un-order co-exist in reality and are infinitely intertwined. Separation of the ontologies serves only as a sense-making tool at the phenomenological level, as assistance in determining the main characteristics of the situation you find yourself in, thus guiding you towards the most useful managerial and epistemological tools for the given ontology.” (Dahlberg, 2015)

The five domains of complexity in the Cynefin context-sensing framework, sorted by increasing level of complexity, are summarized in Table 2-7.

Table 2-7: Summary of Cynefin Framework Domain Names and Multi-Ontological Foundations

2007 Domains | 2004 Domains | Ontology | Epistemology | Cause & Effect Relationship
Disorder | Disorder | Unconcerned | Unknowing | Not considered
Simple | Known | Ordered | Known | a priori cause and effect
Complicated | Knowable | Ordered | Knowable | a priori cause and effect that requires expert knowledge or special investigation
Complex | Complex | Un-ordered | Knowable | a posteriori cause and effect
Chaotic | Chaos | Un-ordered | Un-Knowable | Absence of cause and effect

2.6 Defining System & Model Complexity

Sillitto wrote “There has long been a divide between systems practitioners concerned with “hard” systems – often involving software and complex technologies – and “soft systems,” concerned with social systems and human understanding of systems and human response to complex situations” adding “Both sets of practice seek an underpinning theory or science of systems. However, the relationship of systems science to systems thinking and systems engineering is uncertain, or at least not widely agreed.” (Sillitto H., 2012)

The concept of complexity is so broad that there is no universally accepted definition or set of definitions. There is no SE-industry-accepted definition of complexity, due in part to the existence of multiple theories regarding complexity. In the Handbook of Systems Engineering and Management, Sheard wrote that “complexity is the most commonly used name for the realm on the edge between order and chaos.” (Sheard S., Complex Adaptive Systems in Systems Engineering and Management, 2009) IEEE 15288 includes complex or complexity fifteen times but provides neither a description nor a definition.

INCOSE SEH uses complex or complexity ninety-nine times without a definition.

INCOSE SEBoK defines complexity as “a measure of how difficult it is to understand how a system will behave or to predict the consequences of changing it” adding “it occurs when there is no simple relationship between what an individual element does and what the system as a whole will do.” (INCOSE SEBoK, 2016, p. 93)

73

INCOSE SEH wrote that “a system concept should be regarded as a shared ‘mental representation’ of the actual system” cautioning that “the SE must continually distinguish between systems in the real world and system representations” (e.g., models). (INCOSE SEH, 2015, p. 5) Therefore, this research defines complexity for both actual systems and representative system models and describes the interrelationship between the two.

This section defines system complexity for systems and models of systems based on accepted SE theory, using the definitions provided in Section 2.2, Definition of System Used, and frequently referring to the earlier research and associated results described in Section 2.1, SE Theoretical Foundations, and Section 2.5, Cynefin sense-making Framework. Where appropriate, the findings from previous literature research are reproduced to aid in associating the diverse theories and frameworks required to define system complexity. As a system is a combination of people, process, and technology within a defined environment with feedback, system complexity may be developed by considering each of the system elements in turn.

2.6.1 Definition for System Complexity

A system of interest (SOI) is one or more systems (things) that: has an environment; has a boundary; has elements and relationships; contains system elements that may be people, processes, and/or technologies; exhibits behaviors; and has one or more feedback mechanisms as shown in Figure 2-12.

Figure 2-12: Representation of System of Interest (SOI)

Recall that Ashby described two system states shown in Table 2-1 for systems with feedback.

Table 2-1: Ashby’s Set of Defined System States (from above)

System State | Condition | Meaning
Stable | VO ≥ VD – VR | The regulator contains requisite variety to control the outcome for a given disturbance
Unstable | VO < VD – VR | The regulator does not contain requisite variety to control the outcome for a given disturbance
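To make the stability condition in Table 2-1 concrete, the following minimal Python sketch checks Ashby’s inequality for a candidate regulator. It is illustrative only; the function and variable names (variety, is_stable, v_outcome, v_disturbance, v_regulator) are assumptions introduced here, and varieties are assumed to be measured in bits (log2 of the count of distinguishable states), following Ashby’s logarithmic formulation.

    import math

    def variety(n_states: int) -> float:
        """Variety of a set of n distinguishable states, expressed in bits (log2 n)."""
        return math.log2(n_states)

    def is_stable(v_outcome: float, v_disturbance: float, v_regulator: float) -> bool:
        """Ashby stability condition from Table 2-1: VO >= VD - VR."""
        return v_outcome >= v_disturbance - v_regulator

    # Example: 8 possible disturbances, a regulator with 4 distinct responses,
    # and 2 acceptable outcome states: 1 >= 3 - 2, so the system is stable.
    print(is_stable(variety(2), variety(8), variety(4)))  # True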

2.6.2 Complexity of Technology Elements

Recall from Section 2.1.3.6, Computational Complexity Theory, that Langton described the defined states for computers themselves, or for the logical universes within which computers may be embedded, as shown in Table 2-2.

Table 2-2: Langton & Wolfram’s Defined States for CA that Support Universal Computation (from above)

Langton's CA States | Wolfram's CA States | Meaning
Fixed point | Class I | Relax to a homogeneous fixed point
Periodic | Class II | Relax to a heterogeneous fixed point or to short-period behavior
Complex | Class IV | Support complex interactions between localized structures, often exhibiting long transients
Chaotic | Class III | Relax to chaotic, random behavior

2.6.3 Complexity of Process Elements

IEEE 15288 “provides a process reference model characterized in terms of the process purpose and the process outcomes that result from the successful execution of the activity tasks” adding “It is understood that some users of this International Standard may desire to assess the implemented processes in accordance with ISO/IEC 15504.” (ISO/IEC/IEEE 15288:2015(E), 2015, p. 90) International standardization of process assessment began in 1991 with development of ISO/IEC 15504, Information technology – Process assessment. The initial version of 15504 was released in multiple parts in 1998 as an interim standard. Development of the standard incorporated results from empirical studies and was published in five parts from 2003 to 2006. (Rout, Walker, & Dorling, 2017) IEEE 15288 wrote “processes may be characterized by other attributes common to all processes” adding “ISO/IEC 15504-2 identifies common process attributes that characterize six levels of achievement within a measurement framework for process capability. The purpose and outcomes are a statement of the goals of the performance of each process. This statement of goals permits assessment of the effectiveness of the processes in ways other than simple conformity assessment.” (ISO/IEC/IEEE 15288:2015(E), 2015, pp. 15, 19)

Rout, Walker and Dorling noted that ISO/IEC 15504 focused on process capability and to some extent on organizational maturity, writing “the standards framework needed to be extended to address other characteristics of processes in addition to process capability” adding “a strategy was adopted to define requirements for the construction of measurement frameworks to address identified characteristics in a generic way, with a measurement framework for process capability included.” (Rout, Walker, & Dorling, 2017) ISO/IEC 15504 was partially replaced by ISO/IEC 33001:2015, Information technology – Process assessment – Concepts and Terminology, as of March 2015. Rout, Walker and Dorling wrote “ISO/IEC 3300xx allows for process assessment in terms of a process quality characteristic (PQC)” adding “the new PQC concept has a strong analogy with product quality characteristics.” (Rout, Walker, & Dorling, 2017) While ISO/IEC 15504 defined six capability levels (0 to 5), each process attribute, which consists of one or more generic practices, was assessed on the four-point (N-P-L-F) rating scale shown in Table 2-8.

Table 2-8: Summary of 15504 Process Assessment Ranking Scale

PQC | Meaning | Defined as
F | Fully achieved | >85% – 100%
L | Largely achieved | >50% – 85%
P | Partially achieved | >15% – 50%
N | Not achieved | 0 – 15%
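A minimal sketch of how the N-P-L-F scale in Table 2-8 could be applied is shown below. The function name and the treatment of the boundary values as exclusive lower bounds are assumptions made here for illustration; ISO/IEC 15504 itself does not prescribe this code.

    def rate_process_attribute(achievement_pct: float) -> str:
        """Map a process-attribute achievement percentage to the N-P-L-F scale of Table 2-8."""
        if not 0 <= achievement_pct <= 100:
            raise ValueError("achievement must be between 0 and 100 percent")
        if achievement_pct > 85:
            return "F"  # Fully achieved
        if achievement_pct > 50:
            return "L"  # Largely achieved
        if achievement_pct > 15:
            return "P"  # Partially achieved
        return "N"      # Not achieved

    print(rate_process_attribute(92))  # F
    print(rate_process_attribute(40))  # P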

2.6.4 Complexity of Human Elements

Recall that in Section 2.1.3.7, Complexity in Organizational Theory, above, Towers described the workforce competencies shown in Table 2-3.

Table 2-3: Towers Levels of Appropriate Competencies (from above)

Practices | Work Type | Skill Level | How to Achieve
Best | “Assembly Line” | Proficiency | Training
Good | Information | Fluency | Training & Experience
Emergent | Knowledge | Literacy | Deliberate Practice
Novel | Concept | Mastery | Deliberate Practice (10,000 hrs)

77

2.6.5 Complexity of Environments

Recall that in Section 2.5.4, Cynefin Summary, Snowden described a sense-awareness framework through which people perceive their environment in order to make sense of the situation and make decisions, as shown in Table 2-7.

Table 2-7: Summary of Cynefin Framework Domain Names and Multi-Ontological Foundations (from above)

2007 Domains | 2004 Domains | Ontology | Epistemology | Cause & Effect Relationship
Disorder | Disorder | Unconcerned | Unknowing | Not considered
Simple | Known | Ordered | Known | a priori cause and effect
Complicated | Knowable | Ordered | Knowable | a priori cause and effect that requires expert knowledge or special investigation
Complex | Complex | Un-ordered | Knowable | a posteriori cause and effect
Chaotic | Chaos | Un-ordered | Un-Knowable | Absence of cause and effect

INCOSE and IEEE’s definition of system includes statements highlighting the notion that the definition of a system is dependent on the frame of reference or perspective of individuals:

Systems may be configured with one or more of the following system elements: hardware, software, data, humans, processes (e.g., processes for providing service to users), procedures (e.g., operator instructions), facilities, materials and naturally occurring entities. As viewed by the user, they are thought of as products or services;

and,

The perception and definition of a particular system, its architecture, and its system elements depend on a stakeholder’s interests and responsibilities. One stakeholder’s system-of-interest can be viewed as a system element in another stakeholder’s system-of-interest. Furthermore, a system-of-interest can be viewed as being part of the environment for another stakeholder’s system-of-interest. (ISO/IEC/IEEE 15288:2015(E), 2015, p. 11)

78

2.6.6 Combining People, Process, Technology and Environment Complexity

Because the measures of Ashby, Langton, and Wolfram, ISO/IEC 33001:2015, and the workforce competencies were all designed to measure things that actually exist, none of them considered the case where the existence of the actual thing was unknown or where no concern was expressed for measurement or assessment. Only the Cynefin framework consciously considers Disorder, where the existence of the actual thing may be unknown or there is no concern to measure it. Table 2-9 includes all the actual-system measurements developed above.

Analysis of Table 2-9 shows that, despite the broad set of constructs used to define complexity created by different experts to measure different things, there is a remarkable similarity that allows for alignment.

Table 2-9: Summary of Complexity Measurements for System States, Technology, Process, People/Workforce, and Environment

System States | Langton's CA States | Wolfram's CA States | Process PQC | Workforce Practices | Cynefin Domain
NA | NA | NA | NA | NA | Disorder
Stable | Fixed point | Class I | F | Best | Known
Stable | Periodic | Class II | L | Good | Knowable
Unstable | Complex | Class IV | P | Emergent | Complex
Unstable | Chaotic | Class III | N | Novel | Chaos
States | Technology | Technology | Process | People | Environment
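As a concrete reading of the alignment in Table 2-9, the following Python sketch maps an individual element measurement to its aligned Cynefin domain. The dictionary contents are transcribed from Table 2-9; the function names and the idea of returning the set of per-element domains (rather than summing them, which the following paragraph notes is not meaningful for categorical data) are assumptions introduced here for illustration.

    # Per-element alignment transcribed from Table 2-9.
    TO_CYNEFIN = {
        "langton":   {"Fixed point": "Known", "Periodic": "Knowable",
                      "Complex": "Complex", "Chaotic": "Chaos"},
        "wolfram":   {"Class I": "Known", "Class II": "Knowable",
                      "Class IV": "Complex", "Class III": "Chaos"},
        "process":   {"F": "Known", "L": "Knowable", "P": "Complex", "N": "Chaos"},
        "workforce": {"Best": "Known", "Good": "Knowable",
                      "Emergent": "Complex", "Novel": "Chaos"},
    }

    def element_domains(observations: dict) -> set:
        """Return the set of Cynefin domains implied by per-element observations.

        Unknown or unmeasured elements map to Disorder, per Table 2-9.
        """
        domains = set()
        for element, value in observations.items():
            domains.add(TO_CYNEFIN.get(element, {}).get(value, "Disorder"))
        return domains

    # Example: a program whose process rating is L but whose technology behaves
    # like a Class IV cellular automaton spans the Knowable and Complex domains.
    print(element_domains({"process": "L", "wolfram": "Class IV"}))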

While the complexity measurements may align, complexity is a categorical attribute in which data observations fall into discrete, named value categories that do not allow for mathematical operations. While there is no summation of complexity using commutative, associative, or distributive properties, analysis of Table 2-9 does demonstrate that the Cynefin sense-awareness framework is sufficiently robust to describe the complexity of an environment sensed by the Program Management and/or System Engineering and Management (PM/SEM) leadership, regardless of the source of that complexity in the environment, when acting as a Regulator providing feedback as shown in Figure 2-13.

Figure 2-13: Graphical Summary of Complexity Measurements for System States, Technology, Process, People/Workforce, and Environment

INCOSE SEBoK wrote “complexity is a measure of how difficult it is to understand how a system will behave or to predict the consequences of changing it” adding “It can be affected by objective attributes of a system such as by the number, types of, and diversity of system elements and relationships, or by the subjective perceptions of system observers due to their experience, knowledge, training, or other sociopolitical considerations.” (INCOSE SEBoK, 2016, p. 93) Sheard et al., wrote:

Complexity is a characteristic of more than just a technical system being developed. It is often created by the interaction of people, organizations, and the environment that are part of the complex system surrounding the technical system. Complexity results from the diversity, connectivity, interactivity, and adaptivity of a system and its environment. Constant change makes it difficult to define stable goals for a project or system. Technical systems that worked well in the past to solve an environmental problem become obsolete quickly. Intricate networks of evolving cause-effect relationships lead to subtle bugs and surprising dynamics. Unintended consequences can overwhelm or even negate the intended consequences of actions. (Sheard, et al., A Complexity Primer for Systems Engineers, 2016)

Sage and Rouse wrote that many potential difficulties affecting trustworthy systems are problems associated with “organization and management of complexity” rather than “direct technological concerns.” (Sage & Rouse, An Introduction to Systems Engineering and Systems Management, 2009, p. 13)

While Figure 2-13 provides an interesting conceptual representation for assessing the overall complexity of an actual system based on analysis of the potential contributing factors for complexity from each of the system elements, exact knowledge of the actual system is only possible in deterministic, ordered systems. INCOSE wrote that, “the best way to understand a complicated system is to break it down into parts recursively until the parts are so simple that we understand them” warning that “this approach does not help us to understand a complex system, because the emergent properties that we really care about disappear when we examine the parts in isolation.” (INCOSE SEH, 2015, p. 9)

SEMDAM’s application of the Cynefin framework measures the relationship between the observer (e.g., the PM/SEM) and the SOI. The PM/SEM’s ability to predict system outcomes a priori or perceive system outcomes a posteriori is dependent on the skill and experience of the PM/SEM and is therefore subjective. Weinberg wrote:

Systems writers sometimes speak of ‘emergent’ properties of a system, properties that did not exist in the parts but that are found in the whole. Other writers attack this idea, saying that emergent properties are but another name for vital essence. Moreover, they can support their arguments with specific examples of ‘emergent’ properties that turned out to be perfectly predictable. Which is right?

Both are right, but both are in trouble because they speak in absolute terms, as if the ‘emergence’ were ‘stuff’ in the system, rather than a relationship between system and observer. Properties ‘emerge’ for a particular observer when he could not or did not predict their appearance. We can always find cases in which a property will be ‘emergent’ to one observer and ‘predictable’ to another.

Demonstrations that a property could have been predicted have nothing to do with ‘emergence.’ By recognizing emergence as a relationship between the observer and what he observes, we understand that properties will ‘emerge’ when we put together more and more complex systems. (Weinberg, 1975, p. 60)

2.7 Modelling Complexity

Sheard et al., wrote “all science involves abstraction of the complexity of the world into approaches and models that use simplifying assumptions” adding “The best engineering methods take advantage of the simplicity in the models without diverging so far from reality that behavior can no longer be predicted and controlled.” (Sheard, et al., A Complexity Primer for Systems Engineers, 2016) Senge wrote “from a very early age, we are taught to break apart problems, to fragment the world. This apparently makes complex tasks and subjects more manageable” but also warns that “we pay a hidden, enormous price.” (Senge, 2000) Senge wrote:

We can no longer see the consequences of our actions; we lose our intrinsic sense of connection to the larger whole. When we then try to “see the big picture,” we try to reassemble the fragments in our minds, to list and organize all the pieces. But, as physicist David Bohm says, the task is futile – similar to trying to reassemble the fragments of a broken mirror to see a true reflection. Thus, after a while we give up trying to see the whole together. (Senge, 2000)

INCOSE wrote of the black box/white box terminology “black box represents an external view of the system (attributes)” adding “white box represents an internal view of the system (attributes and structure of the elements).” (INCOSE SEH, 2015, p. 262) Dietz et al., wrote:

A constructional model (or white-box model) of an enterprise, can always be validated from the actual construction. Contrarily, a functional model (or black-box model) is by its very nature subjective, because function is not a system property but a relationship between the system and a stakeholder. Consequently, every system has (at any moment) one construction, but it may have at least as many functions as there are stakeholders. (Dietz, et al., 2013)

In describing the impact of increased complexity on modeling modern systems, Piaszczyk wrote “Complexity of modern systems that integrate humans, software, and hardware to address the frequently conflicting needs and constraints makes requirements engineering increasingly difficult to manage” adding “Dealing with this complexity requires a complete revision of approaches and methods of systems engineering to achieve usable, reliable, and cost-effective solutions to the problems that are becoming more and more difficult.” (Piaszczyk, 2011)

2.7.1 Definition of Model Complexity

This research proposes the use of Class of System Problem (COSP) to measure model complexity. Verification of the completeness and correctness of the COSP included is provided by inspection of Figure 3-1, The INCOSE Complex Systems Working Group Use of the Cynefin Framework to Identify Classes of Systems Problems.

The presence of overloaded terms (e.g., ‘complex’, ‘known’, or ‘simple’) complicates the task of describing SEMDAM. The constructs used to describe complexity by INCOSE’s Complex Systems Working Group, Cynefin’s complexity domains, and the candidate explanations of COSP are interrelated and are frequently used in this dissertation. As the INCOSE COSP and Cynefin domains are synonymous, and the Cynefin domains are more concise and describe disorder, this research proposes use of the candidate explanations for COSP (italicized) based on the Cynefin complexity domain names (not italicized) and INCOSE COSP in Table 2-10.

83

Table 2-10: SEMDAM Alignment Between INCOSE’s Complexity Working Group, Cynefin’s Typology of Operating Environments and SEMDAM Candidate Explanations for COSP

INCOSE Complexity Working Group Domains | Cynefin Typology of Operating Environments | COSP Model Complexity
Not understood or expressed | Disorder | Disorder
Simple and complicated systems | Known or Simple | Known
Massively-complicated systems | Knowable or Complicated | Knowable
Complex systems | Complex | Complex
Chaotic systems | Chaos or Chaotic | Chaos

This research will standardize on the domain names used in the 2004 Kurtz and Snowden presentation and by the INCOSE Complex Systems Working Group, included in the third column of Table 2-10. Users of SEMDAM need to be aware that SEMDAM does not measure actual system complexity. It measures the PM/SEM understanding of the COSP, as demonstrated by the regular identification of predictions or of trends.

2.7.2 Identification of SEMs

Classification of SEMs involves: researching the SE literature to normalize the many divergent representations and definitions of systems and systems engineering; synthesizing the confusing and sometimes conflicting terms used to describe systems into generally acceptable, recognizable groups; identifying significant characteristics of each group; and identifying constant differences. Smith wrote:

There are two basic approaches to classification. The first is typology, which conceptually separates a given set of items multidimensionally… The key characteristic of a typology is that its dimensions represent concepts rather than empirical cases. The dimensions are based on the notion of an ideal type, a mental construct that deliberately accentuates certain characteristics and not necessarily something that is found in empirical reality. (Weber, 1949) As such, typologies create useful heuristics and provide a systematic basis for comparison. Their central drawbacks are categories that are neither exhaustive nor mutually exclusive, are often based on arbitrary or ad hoc criteria, are descriptive rather than explanatory or predictive, and are frequently subject to the problem of reification. (Bailey, 1994)

A second approach to classification is taxonomy. Taxonomies differ from typologies in that they classify items on the basis of empirically observable and measurable characteristics. (Bailey, 1994, p. 6) Although associated more with the biological than the social sciences (Sokal & Sneath, 1964), taxonomic methods–essentially a family of methods generically referred to as cluster analysis–are usefully employed in numerous disciplines that face the need for classification schemes (Lorr, 1983; Mezzich & Solomon, 1980). (Smith K. B., 2002)

Use of a sense-making framework is a form of typological analysis, defined by SAGE Research Methods as:

Typological analysis is a strategy for descriptive qualitative (or quantitative) data analysis whose goal is the development of a set of related but distinct categories within a phenomenon that discriminate across the phenomenon. Typologies are characterized by categorization, but not by hierarchical arrangement; the categories in a typology are related to one another, not subsidiary to one another. (SAGE Research Methods, 2008)

This research incorporated four SEMs, which are listed below and described in detail in Chapter 2, Literature Review. The SEMs are:

• Traditional Systems Method (TSM) – described in Section 2.3.3, Codifying TSM;
• System-of-Systems Method (SoSM) – described in Section 2.3.4, Codifying SoSM;
• Enterprise Systems Method (ESM) – described in Section 2.4.2, Codifying ESM; and,
• Complex Systems Method (CSM) – described in Section 2.4.3, Codifying CSM.

Figure 2-14, not drawn to scale, includes the SEMs identified in the literature research and applied in SEMDAM – NOSEM, TSM, SoSM, ESM, and CSM – and their proposed typology, ordered by the ability of each method to address COSPs of increasing complexity.


Figure 2-14: Proposed Typology of SEMs in Relation to Complexity

The approach of considering one or more SEMs based on increasing complexity is not new. Sheard et al., wrote that as a system’s complexity increases, “the risk associated with using simpler methods and simplifying assumptions also increases, and more advanced techniques may be needed” adding “tools and techniques apply differently to systems on a spectrum of increasing complexity.” (Sheard, et al., A Complexity Primer for Systems Engineers, 2016)

2.7.3 Alignment of COSP and SEMs

To define system complexity that correctly identifies the class of system problem requires definitions for system and complexity suitable across engineering of ordered systems and un-ordered systems including each of the SEMs presented earlier. Applying the Cynefin sense-awareness framework with complexity domains associated with observation of cause and effect, SEMDAM provides a recommendation for a COSP appropriate SEM based on measuring statistically significant association for a priori prediction of system outcomes or a posteriori perception of system response based on the following alignment:

• No evidence of considering a SEM aligns with the Disorder domain;
• Known – where the relationship between cause and effect is obvious to all – aligns with TSM (i.e., assumptions of Newtonian mechanics, decomposition, hierarchical management, and stable environment apply);
• Knowable – where the relationship between cause and effect is knowable with expert knowledge or special investigation – aligns with SoSM (e.g., see INCOSE’s approach to reduce complexity in SE for SoSs, “can be accomplished through the inclusion of contributions from experts across relevant disciplines coordinated by the SE”); (INCOSE SEH, 2015, p. 12)
• Complex – where the relationship between cause and effect can only be perceived in retrospect – aligns with ESM (i.e., emergence, differentiation, selectionism, adaptation, self-organization, homoeostasis, and loose coupling apply; Newtonian mechanics, decomposition, hierarchical management, and stable environment do not apply); and,
• Chaos – where there is no relationship between cause and effect – aligns with CSM (egocentric agents, autonomy, multi-scalarity, anisotropy, edge of chaos; autopoiesis, homoeostasis, Newtonian mechanics, decomposition, hierarchical management, and stable environment do not apply).

Recalling the assertion expressed by DeRosa et al., that “there are classes of problems that require complex systems to deal with them” (DeRosa, Grisogono, Ryan, & Norman, 2008), Table 2-11 presents the proposed alignment of COSP and complexity appropriate SEMs that addresses Ashby’s requisite variety theory as applied to engineering of systems of increasing complexity. Using inferred COSP as input, SEMDAM looks up the associated complexity appropriate SEM from Table 2-11.

Table 2-11: Proposed Alignment Between Inferred COSP and Complexity Appropriate SEM

Inferred COSP | Recommended SEM
Disorder | NOSEM
Known | TSM
Knowable | SoSM
Complex | ESM
Chaos | CSM
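For illustration only, the lookup described above reduces to a simple table-driven mapping. The following minimal Python sketch (the names are illustrative and not part of SEMDAM as published) encodes Table 2-11:

COSP_TO_SEM = {
    "Disorder": "NOSEM",
    "Known": "TSM",
    "Knowable": "SoSM",
    "Complex": "ESM",
    "Chaos": "CSM",
}

def recommend_sem(inferred_cosp: str) -> str:
    # Return the complexity appropriate SEM for an inferred COSP (Table 2-11).
    if inferred_cosp not in COSP_TO_SEM:
        raise ValueError(f"Unknown COSP: {inferred_cosp!r}")
    return COSP_TO_SEM[inferred_cosp]

# Example: an inferred COSP of Complex yields a recommendation of ESM.
assert recommend_sem("Complex") == "ESM"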

Recall that Snowden’s original research into sense-making started with a multi-ontological perspective created by contrasting the nature of systems (ontology) with the nature of the way we know things (epistemology). While the names used to describe the five domains of complexity varied slightly, the fundamental concept that our understanding of complexity is a combination of the nature of systems and the nature of the way we know about systems did not change. This duality is at the heart of the debate between an objective definition of complexity (i.e., something that is an attribute of a system) and a subjective definition of complexity (i.e., what is complex for one person may not be for another). Gorod, Sauser, and Boardman wrote:

While the scope of engineering and managing systems has changed dramatically and become a significant challenge in our ability to achieve success, fundamental to understanding the context of any system is the necessity to distinguish between the system type and its strategic intent, as well as its systems engineering and managerial problems. Therefore, no single approach can solve these emerging problems, and thus no one strategy is best for all projects. (Gorod, Sauser, & Boardman, System-of-Systems Engineering Management: A Review of Modern History and a Path Forward, 2008)


2.8 Literature Review Summary

This section provided an in-depth treatment of the underlying theories and then a definition of system used for this research. Based on classical sciences, systems sciences, and complexity, this research identified a coherent strategy to leverage existing and emerging SEMs, including mapping inherent complexity to the ability of a SEM to address that complexity based on the Cynefin sense-awareness framework.

Next, Chapter 3, SEMDAM Methodology, will describe the specific methodology for utilization of SEMDAM followed by an empirical case study demonstrating SEMDAM applied to a significant problem.


3 SEMDAM Methodology

It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience. (Einstein, 1934)

This chapter introduces SEMDAM and describes how and why SEMDAM works as a basis for the subsequent chapters: Chapter 4, SEMDAM Applied to an Empirical Case Study;

Chapter 5, Synthesis & Discussion; and, Chapter 6, Conclusions. Estefan defined methodology as “a collection of related processes, methods, and tools.” (Estefan, 2007)

Piaszczyk wrote “a methodology is essentially a ‘recipe’ and can be thought of as the application of related processes, methods, and tools to a class of problems that all have something in common.” (Piaszczyk, 2011) McGregor and Murnane wrote that “scholars often use the terms methodology and method interchangeably” adding “{methodology} refers to philosophy and {method} refers to technical procedures applied to conduct research.” (McGregor & Murnane, 2010) McGregor and Murnane provide the definition of methodology used to organize and present this research:

The word methodology comprises two nouns: method and ology, which means a branch of knowledge; hence, methodology is a branch of knowledge that deals with the general principles or axioms of the generation of new knowledge. It refers to the rationale and the philosophical assumptions that underlie any natural, social or human science study, whether articulated or not. Simply put, methodology refers to how each of logic, reality, values, and what counts as knowledge inform research. (McGregor & Murnane, 2010)

Section 3.1, Research Design, describes the methodology used for the design of this research including formulation, alternative approaches considered, and validation.

Section 3.2, SEMDAM Introduction, presents SEMDAM including the processes that comprise the model. Section 3.3, Attribute Selection, describes attribute types and data qualification methods. Section 3.4, Statistical Model Selection, describes statistical methods including linear regression, analysis of variance, and logistic regression.


3.1 Research Design

Using the definitions of research methods by Salkind, this research design is a combination of nonexperimental research including descriptive, correlational, and qualitative methods. (Salkind, 2012, p. 11) Identifying and describing Classes of System Problems (COSP) and the current state of Systems Engineering Methods (SEMs) at the time of this study is based on the descriptive research method, performed by review of literature. Looking for statistically significant relationships between variables to measure the accuracy of a prediction or perception is based on the correlational research method.

Salkind wrote “correlational research describes the linear relationship between two or more variables without any hint of attributing the effect of one variable on another.”

(Salkind, 2012, p. 203) Using the Cynefin framework to sense complexity is a typological analysis based on the qualitative research method. Salkind wrote that “qualitative research is social or behavioral science research that explores the processes that underlie human behavior” adding that “qualitative research is not just an alternative to quantitative research; it is a different approach that allows you to ask and answer different types of questions.” (Salkind, 2012, p. 213)

Brown wrote that good research design, like good systems engineering, requires:

1. an understanding of the context in which the research is being done and of the research requirements and aims;

2. an understanding of the concepts and theoretical principles upon which the research design will be based;

3. the consideration of and selection between alternative design options; and,

4. an early consideration of how the research will be validated, such that planning for validation permeates the research design at all stages and does not become an afterthought. (Brown, 2009)


The research context is presented in Section 3.1.1, Research Context. The understanding of the concepts and theoretical principles is described in Section 3.1.2,

Methodological Formulation. Consideration and discussion of alternate approaches is presented in Section 3.1.3, Alternatives Considered. Section 3.1.4, Plans for Validation, addresses considerations for validation of this research.

3.1.1 Research Context

This research applies the Cynefin sense-awareness framework to provide a diagnostic assessment of the class of system problem (COSP) by analysis of evidence of a priori prediction and/or a posteriori perception of cause and effect as a basis for recommending a SEM appropriate for the COSP. This research is not the first to consider using the

Cynefin framework to sense the complexity of an environment.

Sheard proposed using the Cynefin framework to “determine what your situation is”

{where situation is analogous to COSP} as a recommended principle for management of complex adaptive systems engineering efforts writing that known situations and knowable situations “are the only toolkits that many managers have. Fortunately, Kurtz and

Snowden produced two more” listing complex situations and chaotic situations. (Sheard

S. , Complex Adaptive Systems in Systems Engineering and Management, 2009, p. 1314)

Gardner used the Cynefin framework to “provide knowledge workers and information architects with a framework” as a basis for “development of a suite of Enterprise 2.0” collaboration tools. (Gardner, 2013) French proposed use of the Cynefin framework to identify situations and issues for categorizing decision support options because it benefits analysts by “helping identify what methodologies might be suitable for the problem.”

(French, 2012)


Van Beurden et al., applied the Cynefin framework to “identify approaches appropriate to the level of complexity” applying it to health promotion for “planning or reviewing an entire portfolio of projects to enable emergent {health} practices … while still rolling out standardized, evidence-based strategies.” They observed that using “the

{Cynefin} framework helps those addressing complex issues to communicate the value and meaning of their work within a system that largely privileges a reductionist approach.” (Van Beurden, Kia, Zask, Dietrich, & Rose, 2011)

Recently, the INCOSE Complex Systems Working Group “working at the intersection of complex systems sciences and SE” applied the Cynefin Framework to understand the “classes of systems problems” (COSP) to “facilitate the identification of tools and techniques to apply in the engineering of complex systems” as shown in Figure

3-1. (McEver, 2016)

Figure 3-1: The INCOSE Complex Systems Working Group Use of the Cynefin Framework to Identify Classes of Systems Problems (McEver, 2016) The INCOSE representation cited the source as: Kurtz and Snowden, “The new dynamics of strategy: Sensemaking in complex and complicated world,” IBM Systems Journal 42 (3); 2003.


Members of the INCOSE Complex Systems Working Group include: Sarah Sheard,

Eric Honour, Jimmie McEver, Dorothy McKenney, Alex Ryan, Stephen Cook, Duane

Hybertson, Joseph Krupa, Paul Ondrus, Robert Scheurer, Janet Singer, Joshua Sparber, and Brian White. (McEver, 2016)

3.1.2 Methodological Formulation

This section presents the basic theory on how and why SEMDAM works and how application of SEMDAM will result in a recommendation of a complexity appropriate

SEM. Reasoning is the process of using existing knowledge to draw conclusions, make predictions, or construct explanations. Three main methods of reasoning are: deductive reasoning, inductive reasoning, and abductive reasoning. SEMDAM relies on abductive reasoning.

3.1.2.1 Abductive Reasoning

Abductive reasoning is a form of logical inference that starts with an observation then attempts to find the simplest and most likely explanation given a set of evidence. The

New World Encyclopaedia wrote:

Abduction, or inference to the best explanation, is a method of reasoning in which one chooses the hypothesis that would, if true, best explain the relevant evidence. Abductive reasoning starts from a set of accepted facts and infers most likely, or best, explanations. Aristotle discussed abductive reasoning (apagoge, Greek) in his Prior Analytics. The concept of abduction is applied beyond logic to the social sciences and the development of artificial intelligence.

Abduction means determining the precondition. It is using the conclusion and the rule to assume that the precondition could explain the conclusion. Example: "When it rains, the grass gets wet. The grass is wet, it must have rained." Diagnosticians and detectives are commonly associated with this style of reasoning.

It allows inferring a as an explanation of b. Because of this, abduction allows the precondition of “a entails b” to be inferred from the consequence b. Deduction and abduction thus differ in the direction in which a rule like “a entails b” is used for inference.


The philosopher Charles Peirce introduced abduction into modern logic. In his works before 1900, he mostly uses the term to mean the use of a known rule to explain an observation; for example, “if it rains the grass is wet,” is a known rule used to explain that the grass is wet. In other words, it would be more technically correct to say, "If the grass is wet, the most probable explanation is that it recently rained."

He later used the term to mean creating new rules to explain new observations, emphasizing that abduction is the only logical process that actually creates anything new. Namely, he described the process of science as a combination of abduction, deduction, and induction, stressing that new knowledge is only created by abduction. (New World Encyclopedia, 2017)

Bradford provides an overview of abductive reasoning that highlights its applicability for this research, writing:

Abductive reasoning usually starts with an incomplete set of observations and proceeds to the likeliest possible explanation for the group of observations. It is based on making and testing hypotheses using the best information available. It often entails making an educated guess after observing a phenomenon for which there is no clear explanation. (Bradford, 2017)

The Stanford Encyclopaedia of Philosophy provides the following example to illustrate the use of abduction in science:

At the beginning of the nineteenth century, it was discovered that the orbit of Uranus, one of the seven planets known at the time, departed from the orbit as predicted on the basis of Isaac Newton’s theory of universal gravitation and the auxiliary assumption that there were no further planets in the solar system. One possible explanation was, of course, that Newton’s theory was false. Given its great empirical successes for (then) more than two centuries, that did not appear to be a very good explanation. Two astronomers, John Couch Adams and Urbain Leverrier, instead suggested (independently of each other but almost simultaneously) that there was an eighth, as yet undiscovered planet in the solar system; that, they thought, provided the best explanation of Uranus’ deviating orbit. Not much later, this planet, which is now known as “Neptune,” was discovered. (Stanford Encyclopedia of Philosophy, 2017)

Astronomers would have been more surprised to learn that Newton was wrong than to learn that there is another planet in our solar system. This is called the Surprise Principle.

Sober described this implausibility of occurrence as the Surprise Principle, stating that “the Surprise Principle describes what it takes for an observation to strongly favor one hypothesis over another” as follows:


The Surprise Principle: An observation O strongly supports H1 over H2 if both the following conditions are satisfied, but not otherwise: (1) if H1 were true, O is to be expected; and (2) if H2 were true, O would have been unexpected.

The Surprise Principle involves two requirements. It would be more descriptive, though more verbose, to call the idea the No Surprise/Surprise Principle. The Surprise Principle describes when an observation O strongly favors one hypothesis (H1) over another (H2). There are two requirements:

(1) If H1 were true, you would expect O to be true.

(2) If H2 were true, you would expect O to be false.

That is (1) if H1 were true, O would be unsurprising; (2) if H2 were true, O would be surprising. The question to focus on is not whether the hypotheses (H1 or H2) would be surprising. The Surprise Principle has nothing to do with this. To apply the Surprise Principle, you must get clearly in mind what the hypotheses are and what the observation is. The Surprise Principle gives advice on what a hypothesis must do if it is to be strongly supported by the predictions it makes. First, the hypotheses shouldn’t make false predictions. Second, among the true predictions the hypothesis makes, there should be predictions we would expect not to come true if the hypothesis were false. (Sober, 2012, p. 30)
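To make the two conditions concrete, the following minimal Python sketch checks whether an observation strongly favors H1 over H2; the probability thresholds are assumptions of this sketch and are not part of Sober’s formulation:

def strongly_favors(p_o_given_h1: float, p_o_given_h2: float,
                    expected: float = 0.9, surprising: float = 0.1) -> bool:
    # Surprise Principle: O strongly supports H1 over H2 only if
    # (1) O would be expected if H1 were true, and
    # (2) O would be surprising (unexpected) if H2 were true.
    return p_o_given_h1 >= expected and p_o_given_h2 <= surprising

# Uranus example: O = the observed orbital deviation,
# H1 = an undiscovered eighth planet exists, H2 = there are no further planets.
print(strongly_favors(p_o_given_h1=0.95, p_o_given_h2=0.01))  # True: O favors H1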

According to the Stanford Encyclopaedia of Philosophy, a formulation of abduction follows the format:

ABDX: Given evidence E and candidate explanations H1, …, Hn of E, infer the truth of that Hi which best explains E. (2)

(Stanford Encyclopedia of Philosophy, 2017)

The use of abductive logic imposes several requirements, including defining acceptable evidence (‘given evidence E’) and appropriate ‘candidate explanations H1, …, Hn of E’, both of which are described below.
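A minimal sketch of the ABDX rule follows; the scoring function that judges how well a candidate explains the evidence is a hypothetical placeholder supplied by the analyst, not a defined SEMDAM component:

from typing import Callable, Mapping, Sequence

def abduce(evidence: Mapping, candidates: Sequence[str],
           explains: Callable[[str, Mapping], float]) -> str:
    # ABDX: given evidence E and candidate explanations H1, ..., Hn of E,
    # infer the Hi that best explains E (here, the highest-scoring candidate).
    return max(candidates, key=lambda h: explains(h, evidence))

# Example with the SEMDAM candidate explanations and a placeholder scorer.
COSP_CANDIDATES = ("Disorder", "Known", "Knowable", "Complex", "Chaos")
best = abduce({"a_priori_prediction_significant": True}, COSP_CANDIDATES,
              explains=lambda h, e: 1.0 if h == "Known" and e["a_priori_prediction_significant"] else 0.0)
print(best)  # Known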

3.1.2.2 Defining candidate explanations H1, …, Hn

This research applied the Cynefin sense-awareness framework as a basis for defining the “candidate explanation” used to describe COSP. Louis wrote:

Sense making can be viewed as a recurring cycle comprised of a sequence of events occurring over time. The cycle begins as individuals form unconscious or conscious


anticipations and assumptions, which serve as predictions about future events. Subsequently, individuals experience events that may be discrepant from predictions. Discrepant events, or surprises, trigger a need for explanation, or post-diction, and correspondingly, for a process through which interpretations of discrepancies are developed. Interpretation, or meaning, is attributed to surprises. Based on the attributed meanings, any necessary behavioral responses to the immediate situation are selected. Based on attributed meanings, any understandings of actors, actions, and settings are updated and predictions about future experiences in the setting are revised. The updated anticipations and revised assumptions are analogous to alterations in cognitive scripts. (Louis, 1980)

The Cynefin framework contains five complexity domains that categorize complexity by the nature of the relationship between cause and effect. Specifically, SEMDAM uses the Cynefin framework to sense COSP by analysis of evidence of either an a priori prediction and/or a posteriori perception of cause and effect.

Per Section 2.5, Cynefin sense-making Framework, each Cynefin complexity domain has a distinctive cause and effect relationship. Ordering and evaluating the candidate explanations from Disorder to Chaos allows use of the Surprise Principle for a given set of evidence. For example, if COSP is assumed Known, PM/SEM should be more surprised that a previous prediction of system outcomes is found to be incorrect than if the prediction is correct. The candidate explanations H1, …, Hn align with the COSP defined in Table 2-11, Proposed Alignment Between Inferred COSP and Complexity

Appropriate SEM, of Disorder, Known, Knowable, Complex and Chaos.

3.1.2.3 Inferring COSP ‘Given Evidence E’

SEMDAM infers COSP based on the nature of evidence and the actual evidence described in Table 3-1 by measurement of user-defined attributes selected during Step 5 in Section 3.2.5. Specific details on selecting attributes are described in Section 3.3. Details on selecting and applying statistical methods are described in Section 3.4.


Table 3-1: Definition of Evidence E for SEMDAM

Class of System | Nature of Evidence | Evidence E
Not understood or expressed | No evidence of system engineering and management (SE&M) | Not conscious of alternatives
Simple and complicated systems | a priori cause and effect | Statistically significant predictive analytics of selected attributes
Massively-complicated systems | a priori cause and effect that requires expert knowledge or special investigation | Statistically significant predictive analytics of selected attributes developed by experts or resulting from special studies
Complex systems | a posteriori cause and effect | Statistically significant descriptive analytics of selected attributes
Chaotic systems | Absence of cause and effect | Neither predictive nor descriptive analytics reach the level of statistical significance

The relationship between SEMDAM COSP and the Nature of Evidence is verified using the description method of Section 2.5, Cynefin sense-making Framework.
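For reference in later steps, Table 3-1 can also be read as a simple data structure. The sketch below abbreviates the wording and keys each class of system by the corresponding COSP name, an assumption made here for brevity:

EVIDENCE_E = {
    "Disorder": "No evidence of SE&M; not conscious of alternatives",
    "Known": "Statistically significant predictive analytics of selected attributes",
    "Knowable": ("Statistically significant predictive analytics of selected attributes "
                 "developed by experts or resulting from special studies"),
    "Complex": "Statistically significant descriptive analytics of selected attributes",
    "Chaos": "Neither predictive nor descriptive analytics reach statistical significance",
}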

3.1.3 Alternatives Considered

Table 3-2 contains some of the alternatives considered in developing SEMDAM.

Table 3-2: Summary of Alternatives Considered

SEMDAM Component | SEMDAM Selection | Alternative Considered | Described in Section
Complexity framework | Cynefin Framework | MITRE Enterprise Systems Engineering (ESE) Profiler | 3.1.3.1
Complexity framework | Cynefin Framework | MITRE Systems Engineering Activities (SEA) Profiler | 3.1.3.2
Complexity framework | Cynefin Framework | Types of System Complexity Framework | 3.1.3.3
Reasoning model | Abductive reasoning | Deductive Reasoning | 3.1.3.4
Reasoning model | Abductive reasoning | Inductive Reasoning | 3.1.3.5


3.1.3.1 MITRE Enterprise Systems Engineering ESE Profiler

Stevens wrote that “engineering of these large scales, complex systems must take into account the specific characteristics of the system and the context in which it is being engineered, developed, and acquired and in which it will operate” adding “The first step is to be able to characterize the systems in its context.” Stevens wrote “This complexity framework defines the problem context along multiple dimensions taking into account both the nature of the decision makers and the nature of the system itself.” The ESE

Profiler is based on a COSP of TSM, SoSM, and ESM which are identified in the polar diagram output, shown in Figure 3-2, as the three concentric rings from the center reflecting “increasing levels of complexity and uncertainty.” (Stevens, 2008)

The aspects of ESE Profiler that are similar to and provide validation of SEMDAM are: (1) agreement that “system-based problem-solving methodologies should be selected based on the context of the problem” (i.e., assess COSP and align with SEM); (2) use of a COSP that is in line and in order with SEMDAM (i.e., TSM is less complex than SoSM, which is less complex than ESM); (3) acknowledgement that the assessment measures complexity based on both system and observer; and, (4) agreement that a complexity assessment provides value as either a one-time self-assessment or a recurring situational modeling tool. ESE Profiler was not selected as the complexity framework due to the inability to verify, validate or in any way independently measure the statistical significance of the responses to strategic context, implementation context, stakeholder context, and systems context.


Figure 3-2: MITRE's Enterprise Systems Engineering Profiler is organized into four quadrants and three rings (Stevens, 2008)

3.1.3.2 MITRE Systems Engineering Activities (SEA) Profiler

White wrote that the SEA Profiler was constructed for application in complex and enterprise systems engineering endeavors and is “primarily intended to be used by a systems engineer, program manager, or project leader, for characterizing the systems engineering (SE) being done on their program/project within a given environment and timeframe.” (White B. E., Systems Engineering Activities (SEA) Profiler, 2010) While similar in structure to the Enterprise Systems Engineering (ESE) Profiler (Section 3.1.3.1, above), White describes some of the distinctions when he wrote:

The SEA Profiler can be used in conjunction with the Enterprise Systems Engineering (ESE) Profiler (Stevens 2008) which is primarily oriented toward characterizing the nature and degree of difficulty of the environment surrounding the program/project effort. In short, the ESE Profiler helps you characterize your situation, and the SEA


Profiler helps you characterize what you're doing about it. (White B. E., Systems Engineering Activities (SEA) Profiler, 2010)

The SEA Profiler involves assessment of 9 groups of typical SE activities – define the system problem, analyze alternatives, utilize a guiding architecture, consider technical approaches, pursue solutions, manage contingencies, develop implementations, integrate operational capabilities, and learn by evaluating effectiveness – by selecting one of five COSP levels from the following choices:

A. Left end of slider – activities are intended to characterize and be most closely associated with the “traditional” practice of conventional or prescriptive SE utilizing the best-known techniques;

B. Left Intermediate Interval – activities associated mostly with a “directed” system of systems (SoS);

C. Center Intermediate Interval – activities are intended to characterize and be most closely associated with an “acknowledged” SoS;

D. Right Intermediate Interval – activities associated mostly with a “collaborative” SoS; and

E. Right End of Slider – activities associated mostly with a “virtual” SoS. (White B. E., Systems Engineering Activities (SEA) Profiler, 2010)

The result of using the SEA Profiler is a situational awareness assessment dashboard shown in Figure 3-3.

The aspects of SEA Profiler that are similar to and provide validation of SEMDAM are: (1) agreement that the intended users are systems engineers, program managers or project leaders; (2) use of a COSP that is in line and in order with SEMDAM (e.g., TSM is less complex than SoSM); and (3) agreement on the need to assess the COSP and align with the SEM. SEA Profiler was not selected as the complexity framework due to the inability to verify, validate or in any way independently evaluate or measure the


statistical significance of the responses to the 9 typical SE activities and the limitation that SEA Profiler only considers TSM and SoSM while not considering ESM or CSM.

Figure 3-3: Example Situational Assessment from SEA Profiler (White B. E., Systems Engineering Activities (SEA) Profiler, 2010)

3.1.3.3 Types of System Complexity Framework

In her dissertation (Sheard S. A., Assessing the Impact of Complexity Attributes on

System Development Project Outcomes, 2012) Sheard wrote that her research was based on a Types of System Complexity framework previously introduced by Sheard and refined by Sheard and Mostashari. Sheard’s initial resilience framework consisted of five aspects that are “often part of resilience definitions in literature: time periods, system, event, required action, and preserved qualities; and five prescriptive principles to improve resilience: system, organizational, economic, ecological, political, and socio-ecological.”


(Sheard S. , A Framework for System Resilience Discussions, 2008) Sheard and

Mostashari updated the framework for types of complexity that “includes three types of structural complexity (size, connectivity, and architecture), two types of dynamic complexity (short-term and long-term), and one additional type, socio-political complexity.” (Sheard & Mostashari, A Complexity Typology for Systems Engineering,

2010) Sheard’s Types of System Complexity Framework, used as the basis for her dissertation, was based on six types of complexity including: structural complexity (size)

{SS}; structural complexity (connectivity) {SC}; structural complexity (inhomogeneity)

{SI}; dynamic complexity (short term) {DS}; dynamic complexity (long term) {DL}; and Socio-political complexity {SP} as shown in Figure 3-4.

Figure 3-4: Sheard's Types of Complexity Framework Applied to Entities (Sheard S. A., Assessing the Impact of Complexity Attributes on System Development Project Outcomes, 2012, p. 58)

Sheard’s dissertation described the intended primary output as “a list of complexity measures that are shown to have a statistically significant impact on specific project


outcomes, including cost overrun, schedule delay, and performance shortfall, and possibly others” by looking for association between “potentially complex attributes and system outcomes” eventually surveying senior INCOSE engineers on 52 input variables and their relationships to other variables in 10 groups: Project management (10 variables); Project basics (9 variables); Size (9 variables); Requirements (8 variables);

Stakeholders (10 variables); Conflict (4 variables); Uncertainty (2 variables); Changes (5 variables); Skills (2 variables); and, Precedence (4 variables) eventually collecting 75 usable responses from the 121 received. (Sheard S. A., Assessing the Impact of

Complexity Attributes on System Development Project Outcomes, 2012)

Sheard’s Types of System Complexity Framework was considered for use because it attempted to identify predictive complexity measurements. Sheard wrote “While no causality was demonstrated, it is shown that projects with lower values of three specific complexity measures have better outcomes of all kinds. Those measures are the number of difficult requirements, the “cognitive fog” experienced within the project, and the ‘stakeholder involvement’ characteristics;” however, Sheard noted limitations including qualitative not quantitative results and observed that, by its nature, a retrospective survey is only able to assess projects that have finished. (Sheard S. A., Assessing the Impact of Complexity Attributes on System Development Project Outcomes, 2012) This complexity framework was not selected due to its reliance on subjective judgement, which does not allow for independent validation or verification of responses, and because of the retrospective nature of the survey.

3.1.3.4 Deductive Reasoning

The New World Encyclopaedia and Trochim describe deductive reasoning as follows:


Deduction means determining the conclusion. It is using the rule and its precondition to make a conclusion. Example: "When it rains, the grass gets wet. It rains. Thus, the grass is wet." Mathematicians are commonly associated with this style of reasoning. It allows deriving b as a consequence of a. In other words, deduction is the process of deriving the consequences of what is assumed. Given the truth of the assumptions, a valid deduction guarantees the truth of the conclusion. A deductive statement is based on accepted truths. For example, all bachelors are unmarried men. It is true by definition and is independent of sense experience. (New World Encyclopedia, 2017)

{As shown in Figure 3-5} Deductive reasoning works from the more general to the more specific. Sometimes this is informally called a "top-down" approach. We might begin with thinking up a theory about our topic of interest. We then narrow that down into more specific hypotheses that we can test. We narrow down even further when we collect observations to address the hypotheses. This ultimately leads us to be able to test the hypotheses with specific data – a confirmation (or not) of our original theories (Trochim, 2006)

Figure 3-5: Deductive reasoning begins with Theory and is therefore dependent on existence of acceptable theory (Trochim, 2006)

Since deductive arguments attempt to demonstrate that a conclusion must be true provided that the premises are true, the primary objection to the application of deductive reasoning here is the absence of SE premises that are generally accepted as true (i.e., SE “accepted truths” or SE industry accepted theory). Specific issues with foundational SE theory that impact development of a diagnostic assessment model include the lack of industry-wide recognition of the naming, number, or need for SEMs other than traditional or classical SE.


3.1.3.5 Inductive Reasoning

The New World Encyclopaedia and Trochim describe inductive reasoning as follows:

Induction means determining the rule. It is learning the rule after numerous examples of the conclusion following the precondition. Example: "The grass has been wet every time it has rained. Thus, when it rains, the grass gets wet." Scientists are commonly associated with this style of reasoning. It allows inferring some a from multiple instantiations of b when a entails b. Induction is the process of inferring probable antecedents as a result of observing multiple consequents. An inductive statement requires perception for it to be true. For example, the statement, "it is snowing outside" is invalid until one looks or goes outside to see whether it is true or not. Induction requires sense experience. (New World Encyclopedia, 2017)

{As shown in Figure 3-6} Inductive reasoning works the other way, moving from specific observations to broader generalizations and theories. Informally, we sometimes call this a “bottom up” approach. In inductive reasoning, we begin with specific observations and measures, begin to detect patterns and regularities, formulate some tentative hypotheses that we can explore, and finally end up developing some general conclusions or theories. (Trochim, 2006)

Figure 3-6: Inductive reasoning begins with observation and is therefore dependent on existence of measurement (Trochim, 2006)

Since inductive reasoning attempts to derive theory from some number of observations, validity of results relies on inferential statistics. According to Salkind,

“inferential statistics are used to infer something about the population from which the sample was drawn based on the characteristics of the sample.” (Salkind, 2012, p. 177)

Salkind wrote “the central limit theorem is in many ways the basis for inferential statistics” adding “the critical link between obtaining the results from the sample and


being able to generalize the results to the population is the assumption that repeated sampling from the population will result in a set of scores that are representative of the population.” (Salkind, 2012, p. 178) French wrote:

Scientific induction and statistical inference begins with the assumption that we have made enough progress to have some understanding of cause and effect, perhaps even a hypothesis or putative model. The methodology of science has focused much more on the testing and validation of models and theories and the estimation of parameters, that is, those processes that fall in the known and knowable spaces. Repeatability has come to lie at the heart of the scientific induction. Scientific models and theories can only be validated if they can be tested again and again in identical circumstances and shown to explain and predict system behaviours. (French, 2012)

The central limit theorem states that the sampling distribution of the mean of any independent, random variable will be normal or nearly normal, if the sample size is large enough. How large is large enough? The answer depends on two factors: requirements for accuracy and shape of the underlying population. The more closely the sampling distribution needs to resemble a normal distribution (i.e., accuracy and confidence), the more sample points will be required. The more closely the original population resembles a normal distribution (e.g., shape), the fewer sample points will be required. In practice, statisticians suggest a sample size of between 30 and 40 samples when the population distribution is roughly bell-shaped. But if the original population is distinctly not normal

(e.g., is badly skewed, has multiple peaks, and/or has outliers), researchers like the sample size to be even larger.
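A brief simulation sketch of this rule of thumb follows; the population distribution, sample sizes, and repetition count are illustrative assumptions only:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
population = rng.exponential(scale=2.0, size=100_000)  # distinctly non-normal (right-skewed)

for n in (5, 40):
    sample_means = [rng.choice(population, size=n, replace=False).mean()
                    for _ in range(2_000)]
    # The skewness of the sampling distribution of the mean shrinks roughly as
    # 1/sqrt(n), which is why 30 to 40 samples is a common rule of thumb for
    # roughly bell-shaped populations and more are needed for skewed ones.
    print(f"n = {n:2d}: skewness of sample means = {stats.skew(sample_means):.2f}")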

The primary objection to the application of inductive reasoning here is the inability to define an attribute that measures system complexity in situ, compounded by the inability to obtain (even assuming such a system complexity attribute is identified) a large enough set of independent and identically distributed samples from observation of non-static, non-repeatable program environments at unique points in time.


3.1.4 Plans for Validation

This section describes plans for validation of the general model SEMDAM. Chapter 4, SEMDAM Applied to an Empirical Case Study, provides validation of SEMDAM via an empirical case study demonstrating its use. IEEE defines verification as the “confirmation, through the provision of objective evidence, that specified requirements have been fulfilled” while defining validation as “confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled.” (ISO/IEC/IEEE 15288:2015(E), 2015, p. 10) Verification using the methods of inspection, analysis, demonstration, and/or test ensures that the system is made “right”, while validation using comparative assessment ensures that the “right” system is made.

According to the New World Encyclopaedia:

Abductive validation is the process of validating a given hypothesis through abductive reasoning. Under this principle, an explanation is valid if it is the best possible explanation of a set of known data. The best possible explanation is often defined in terms of simplicity and elegance (such as Occam's razor). (New World Encyclopedia, 2017)

Checkland wrote that the basis for acceptance of a scientific finding may range from the strong criterion of repeatability – “for findings to be accepted as part of the body of ‘scientific knowledge’ they have to be repeatable, time and again, by scientists other than those who first discovered them” – to the weak criterion of plausibility – “does this finding make a believable story?” (Checkland, 2000) To support the need for independent validation of this model, this research incorporates advice from Checkland:

Research should be conducted in such a way that the whole process is subsequently recoverable by anyone interested in critically scrutinizing the research. This means declaring explicitly, at the start of the research, the intellectual frameworks and the process of using them which will be used to define what counts as knowledge in this piece of research. (Checkland, 2000)


The theoretical foundation of SEMDAM is based on the ontological foundation inherent within the Cynefin complexity framework. Dietz et al., wrote “ontological theories are theories about the nature of things. They address explanatory and/or predictive relationships in observed phenomena;” adding:

Ontological theories are valuated by their soundness and their appropriateness. The soundness of an ontological theory is established by its being rooted in sound philosophical theories. The appropriateness of an ontological theory is established by the evaluation of its practical application, e.g., through expert judgments. (Dietz, et al., 2013)

Sections 3.1.1, Research Context; 3.1.2, Methodological Formulation; and 3.1.3, Alternatives Considered, provided detailed explanation of the underlying theories to support validation of the soundness of SEMDAM.

Validation of the appropriateness of SEMDAM requires comparative analysis of: the SEMs used, the complexity framework used, and independent validation of the theory that COSP impacts program execution.

Comparative analysis of the fit for purpose and fit for use of the SEMs is provided in Chapter 2, Literature Review. Comparative analysis of the appropriateness of the SEMs used is graphically depicted in Figure 3-7, by Dr. Brian White, showing how practice drives theory in the years since 1950, and in Figure 3-8, by Drs. Gorod, Gandhi, White, Ireland, and Sauser, describing the Relative Difficulty of Engineering Various Types of Systems.


[Figure 3-7 chart: curves for Systems Engineering, System of Systems Engineering, Enterprise Systems Engineering, and Complex Systems Engineering (concept, practice, theory) plotted against years since 1950, 0 to 80.]

Figure 3-7: White on How Practice Drives Theory in the Years Since 1950 (White B. E., On a Maturity Model for Complexity, Complex Systems, and Complex Systems Engineering, 2016)

Both representations, by individuals or groups of recognized experts, depict the same four SEMs used in SEMDAM.

[Figure 3-8 chart: System, System of Systems, Enterprise, and Complex System ordered by increasing degree of difficulty, with a note that the most general complex system might be the most difficult type of system to engineer.]

Figure 3-8: Relative Difficulty of Engineering Various Types of Systems (Gorod, Gandhi, White, Ireland, & Sauser, 2015, p. 26)


Appropriateness of the Cynefin framework to determine COSP is described in Sections 2.5, Cynefin sense-making Framework, and 3.1.1, Research Context. Alternative complexity frameworks considered are described in Section 3.1.3.

Independent validation that COSP impacts project execution is provided by Sheard who wrote, “projects with a lower value of complexity had better outcomes … than the higher complexity set of projects” (Sheard S. A., Assessing the Impact of Complexity

Attributes on System Development Project Outcomes, 2012, p. 108) and Roberts,

Mazzuchi and Sarkani who wrote “Results from 526 programs analyzed and among experts surveyed, suggest that current processes and/or cost estimates for the design and development of major weapons programs are less suited for complex systems” adding

“SoS programs have a significantly higher likelihood of overrunning cost than PLA or

SYS programs; and PLA programs have a higher likelihood of overrunning than SYS programs.” (Roberts, Mazzuchi, & Sarkani, 2016) In the Roberts study, COSP was ranked as “System, Platform, and SoS” increasing from System to SoS.

This research design is a combination of nonexperimental research methods, including descriptive, correlational, and qualitative methods; it is an observational study and not a design of experiments. Maier provided justification for the use of nonexperimental rather than experimental research for systems engineering studies when he wrote:

It would be desirable to test the proposed heuristics in a broader way through detailed case study. As in most systems engineering studies, formal experiment is not really possible. We don’t build duplicate complex systems by different methods just to see what would happen. We can look retrospectively at built systems to test the applicability of heuristics, however. (Maier, Architecting Principles for Systems-of- Systems, 1998)


3.2 SEMDAM Introduction & Systematic Description

SEMDAM, based on the Cynefin sense-awareness framework, provides a recommendation for an appropriate SEM and/or a periodic reassessment of the appropriateness of an in situ SEM using a Diagnostic Assessment Model (DAM).

SEMDAM measures PM/SEM understanding of system complexity by evaluating statistically significant association of a priori prediction of system output or a posteriori perception of system response. The model is termed diagnostic rather than decision because it is based on abduction, or inference to the best explanation, which is a logical method of reasoning in which one chooses the hypothesis that would, if true, best explain the relevant evidence. The delineation of assessment versus analysis is based on the difference between the two terms. According to the Merriam-Webster dictionary, analysis is defined as “a detailed examination of anything complex in order to understand its nature or to determine its essential features” (Merriam-Webster, 2017) while assessment is defined as “the action or an instance of making a judgement about something.” (Merriam-Webster, 2017)

Figure 3-9, System Engineering Method Diagnostic Assessment Model (SEMDAM), shows the process flowchart of SEMDAM. Figure 3-9 incorporates the abductive logic model and addresses the initial test for Disorder. Figure 3-9 solidifies the relationship to the Cynefin framework by highlighting the search for cause and effect explicitly described in “Obtain Evidence for Cause & Effect Analysis.” Figure 3-9 shows that the expected output is a recommendation for a SEM appropriate to the COSP.

SEMDAM is intended for use by Program Management (PM) and/or SE Management

(PM/SEM) in service organizations, system development organizations, system


integrators (SI), or lead system integrators (LSI) that execute programs or projects that include the requirement for a system or software development lifecycle (S/SDLC) to deliver systems or services. SEMDAM supports two use cases:

(1) An aperiodic assessment tool for initial or one-time use; and,

(2) A periodic situational model to reassess appropriateness of an in situ SEM.

The following sections describe the SEMDAM methodology including specification of the COSP hypotheses using the formulation of abductive logic rules described in

Section 3.1.2.1, Abductive Reasoning above. Depending on results, there is potential for up to five abductive logic tests – each identified using “ABDX.”

This research is based on observational study and not design of experiments; therefore, SEMDAM’s statistical models look for correlation rather than causality.

SEMDAM is based on abductive reasoning, also referred to as diagnosis, which typically begins with an incomplete set of observations and proceeds to the likeliest possible explanation. Analogous to a medical diagnosis, SEMDAM does not guarantee selection of the “best” SE method. SEMDAM was influenced by the research into the science of administrative behavior and executive decision making by Herbert Simon in the 1960s and has adopted the goal of satisfactory rather than optimum. (Checkland, 2000) SEMDAM does not claim to recommend an optimal SEM; rather, SEMDAM provides a recommendation of a satisfactory SEM that is appropriate for the inferred COSP. Each of the tasks or hypothesis tests is described using an Input – Process – Output frame of reference and, given the potential for branching logic, includes a Next section with directions. Each task or hypothesis is introduced with an Overview section that provides reference to related sections of INCOSE’s SEH (2014 edition) or IEEE 15288:2015(E).



Figure 3-9: System Engineering Method Diagnostic Assessment Model (SEMDAM)
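The branching logic of Figure 3-9 can be sketched in Python as follows; the helper callables are hypothetical placeholders for the evidence-gathering and hypothesis-testing steps (the IV&V tasks of Steps 3 and 4 are omitted for brevity), and recommend_sem reuses the Table 2-11 lookup sketched in Section 2.7.3:

from typing import Callable, Mapping

def semdam_flow(program,
                has_sem_evidence: Callable,        # Steps 1-2 (ABD1): any SE&M artifacts?
                obtain_evidence: Callable,         # Step 5: gather attribute data for the assumed COSP
                abd_tests: Mapping[str, Callable]  # Steps 6-8 (ABD2-ABD4): significance tests
                ) -> str:
    # Sketch of Figure 3-9: rule out Disorder first, then evaluate Known,
    # Knowable, and Complex in order; Chaos remains if every test is rejected.
    if not has_sem_evidence(program):
        return recommend_sem("Disorder")           # Step 10 via ABD1
    for assumed_cosp in ("Known", "Knowable", "Complex"):
        evidence = obtain_evidence(program, assumed_cosp)   # Step 5 (revisited on each pass)
        if abd_tests[assumed_cosp](evidence):
            return recommend_sem(assumed_cosp)     # Step 10
    return recommend_sem("Chaos")                  # Step 9 (ABD5), then Step 10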

3.2.1 Step 1 – Gather Evidence of SE&M Activity

Overview – The objective of SEMDAM is to provide a recommendation for a complexity appropriate SEM based on COSP. Implicit in the name COSP is the fundamental requirement to obtain a clear and concise understanding of the System

Problem under evaluation or development. Shenhar and Sauser wrote that “no complex system can be created by a single person thus systems engineering is strongly linked to management. We therefore need to combine the two fields and talk about systems engineering management.” (Shenhar & Sauser, 2009, p. 117) Their definition of system engineering management as “the application of scientific, engineering, and managerial efforts” included the requirement to “Work with clients to ensure that the system created is qualified to address required needs and solve clients’ problems.” (Shenhar & Sauser,

2009, p. 119)

Input – External stakeholders, PM or SE&M leadership initiate an assessment.

Process – INCOSE wrote “Every man‐made system has a life cycle, even if it is not formally defined” adding “A life cycle can be defined as the series of stages through which something (a system or manufactured product) passes.” (INCOSE SEH, 2015, p.

25) IEEE wrote “The purpose of the Life Cycle Model Management process is to define, maintain, and assure availability of policies, life cycle processes, life cycle models, and procedures for use by the organization.” (ISO/IEC/IEEE 15288:2015(E), 2015, p. 23)

Both INCOSE and IEEE expect the use of a life cycle model and associated artifacts.

This step searches for any artifacts of SE&M activity or external evidence of SE&M documentation including discussion of COSP if available.

Output – Evidence of SE&M activity, if found.


Next – Proceed to Section 3.2.2, Step 2 – Evaluate Hypothesis ABD1 & Interpret if

COSP is Disorder.

3.2.2 Step 2 – Evaluate Hypothesis ABD1 & Interpret if COSP is Disorder

Overview – As an initial test, SEMDAM seeks evidence of consideration, use or attempted use of systems engineering and management (SE&M) methodologies. Artifacts that would indicate that SE&M has been considered include identification or consideration of a SDLC, listing or consideration of requirements, evidence of analysis of alternatives or the existence of problem analysis artifacts such as models or simulations.

Previous assessments of COSP would indicate that SE&M has been considered so previous assessments would be evidence as well.

Input – Evidence of SE&M activity from Step 1, if found; and, Candidate Explanations: Disorder, Known, Knowable, Complex, Chaos.

Process – Using the evidence provided (or the lack thereof) and the candidate explanations, evaluate hypothesis ABD1.

ABD1: Given evidence (E => No evidence of system engineering and management {SE&M}) and candidate explanations: Disorder, Known, Knowable, Complex, or Chaos of E, infer the truth of that Disorder best explains E. (3)

Output – If we fail to reject ABD1, infer that COSP is Disorder. Proceed to Step 10.

Next – If we reject ABD1, infer COSP is NOT Disorder. Assume COSP is Known and proceed to Step 3.
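A minimal sketch of this test follows, using illustrative artifact names drawn from the Overview above rather than an official list:

SEM_ARTIFACT_KEYS = ("sdlc", "requirements", "analysis_of_alternatives",
                     "models_or_simulations", "previous_cosp_assessment")

def abd1_is_disorder(artifacts: dict) -> bool:
    # ABD1: with no evidence of SE&M activity, Disorder best explains E.
    return not any(artifacts.get(key) for key in SEM_ARTIFACT_KEYS)

# Example: a requirements list was found, so ABD1 is rejected (COSP is not Disorder).
print(abd1_is_disorder({"requirements": ["REQ-001"], "sdlc": None}))  # False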


3.2.3 Step 3 – IV&V Program and SE Management

Overview – This task performs independent verification and validation (IV&V) of the

SE activities in IEEE 15288 paragraph 6.2.1, Life cycle model management process, and/or INCOSE paragraph 7.1, Life Cycle Model Management process. Since COSP is

Not Disorder, this task assumes IEEE 15288 6.4.2, Stakeholder needs and requirements definition process, or a similar set of activities, has previously taken place and their artifacts are available for review and analysis. Shenhar and Sauser wrote “the SE efforts

… start with the identification of need … which must be matched with a technical feasibility of a system that will be capable of addressing this need.” (Shenhar & Sauser,

2009, p. 120) Gilbertson, Tanju and Eveleigh wrote “SEM ‘gets the big picture’ ensuring that the SE team observes the need, is properly oriented appropriately to focus on the opportunity, makes decisions when necessary, and spurs the SE team into action understanding expectations and constraints.” (Gilbertson, Tanju, & Eveleigh, 2017)

Input – Rejection of hypothesis ABD1. Previously identified Stakeholder Needs.

Process – This task independently verifies and validates that “stakeholder requirements are defined considering the context of the system-of-interest with the interoperating systems and enabling systems.” (ISO/IEC/IEEE 15288:2015(E), 2015, p.

51) If the stakeholder needs are not defined, this task would then identify the stakeholders and their needs from analysis of the environment, review of external documentation or other means. MITRE wrote “In the context of a systems engineering life cycle, an operational needs assessment forms the basis for defining requirements for a program and a system” adding that the assessment must “Determine the specific requirements of the


needs assessment process that apply” and “Identify specific stakeholders … including their responsibility, goals, and roles/relationships.” (MITRE, 2014, p. 281)

Output – Validated Stakeholder Need(s).

Next – Proceed to Step 4.

3.2.4 Step 4 – IV&V Business or Mission Analysis

Overview – This task performs IV&V of the SE activities described in IEEE section

6.4.1, Business or Mission Analysis process, and/or INCOSE section 4.1, Business or

Mission Analysis process. IEEE wrote that the purpose of BMA is to, “define the business or mission problem or opportunity, characterize the solution space, and determine potential solution class(es) that could address a problem or take advantage of an opportunity” noting that “this process interacts with the organization’s strategy, which is generally outside the scope of 15288.” (ISO/IEC/IEEE 15288:2015(E), 2015, p. 48)

MITRE wrote that BMA should “Define the sponsor’s and customer’s problem or opportunity from a comprehensive, integrated perspective.” (MITRE, 2014, p. 16) IEEE wrote the BMA process “provides for the definition of the problem space and characterization of the solution space, including the relevant trade-space factors and preliminary life cycle concepts. This includes developing an understanding of the context and any key parameters, such as the critical quality characteristics.” (ISO/IEC/IEEE

15288:2015(E), 2015, p. 95)

Input – COSP is NOT Disorder & assumed Known. Validated Stakeholder Need(s).

Process – This task performs an independent verification and validation (IV&V) that the mission problem or opportunity is noted. If the system problem or opportunity is not defined, this task would then identify the system problems or opportunities from analysis


of the environment, review of external documentation or other means. Step 4 ensures that the noted problem or opportunity description has been updated to reflect any subsequent updates or refinements since program inception. INCOSE wrote:

Too often, the system definition is viewed as a linear, sequential, single pass through the processes. However, valuable information and insight need to be exchanged between the processes, in order to ensure a good system definition that effectively and efficiently meets the mission or business needs. The application of iteration and recursion to the life cycle processes with the appropriate feedback loops helps to ensure communication that accounts for ongoing learning and decisions. This facilitates the incorporation of learning from further analysis and process application as the technical solution evolves. (INCOSE SEH, 2015, p. 32)

Output – Validated Problem or Opportunity (i.e., system problem)

Next – Proceed to Step 5.

3.2.5 Step 5 – Obtain Evidence for Cause and Effect Analysis

Overview – The phrase, “if all you have is a hammer, everything looks like a nail,” is a criticism of using a familiar approach to solve all problems when it may be more appropriate to use a more difficult or less familiar one. IEEE wrote “the detail of the life cycle implementation within a project is dependent upon the complexity of the work, the methods used, and the skills and training of personnel involved in performing the work.”

(ISO/IEC/IEEE 15288:2015(E), 2015, pp. 24, 102)

This research focuses on measurement due in large part to the advice of Sage who wrote “success in implementation of systems engineering is critically dependent on the availability of appropriate measurements” adding, “management and measurement are irretrievably interconnected.” (Sage A. P., Systematic Measurements, 2009, p. 575) Sage wrote that, “one major need in all of systems engineering and systems management, at all levels, is to obtain the information and knowledge necessary to organize and direct” adding, “this information can only be obtained through an appropriate program of


systematic measurements and development of appropriate models for use in processing this information.” (Sage A. P., Systematic Measurements, 2009, p. 575)

Step 3 – IV&V Program and SE Management and Step 4 – IV&V Business or Mission Analysis ensure that this task has a clear vision of the intended system, avoiding what MITRE called “The most common problem about program performance cited in research … is that the program’s goals/objectives have not been identified” adding “It is impossible to develop measures of progress if we do not know where we are trying to go.” (MITRE, 2014, p. 76)

Step 5 is the core component of SEMDAM where the Cynefin sense-awareness framework is used to categorize the relationship between system input (cause) and system output (effect) by identifying attributes and appropriate statistical models for analysis.

SEMDAM provides a practical approach for recommending a complexity appropriate

SEM among competing alternatives without requiring complete knowledge. There is a substantial distinction between assumed COSP and inferred COSP – assumed COSP is where SEMDAM starts (after ruling out an inferred COSP of Disorder in Step 2) and inferred COSP is the basis for the final activity described in Section 3.2.10, Step 10 –

Recommend SEM based on Inferred COSP.

Input – Rejection of hypothesis ABD1, Validated Stakeholder Needs, Validated

Problem or Opportunity, and assumed COSP.

Process – Identify candidate attributes per Section 3.3, Attribute Selection Method. If

COSP is assumed to be Known, make a prediction for a candidate attribute and then gather data on the candidate attribute. Later, validate the accuracy of that prediction by measuring the statistical significance of the prediction using one of the statistical models


in Section 3.4, Statistical Model Selection. If COSP is assumed to be Knowable, make a prediction for a candidate attribute that requires expert knowledge or special investigation. Later, validate the accuracy of that prediction using one of the statistical models in Section 3.4, Statistical Model Selection. If COSP is assumed to be Complex, attempt to identify a historical trend for a candidate attribute and validate the accuracy of that perception using one of the statistical models in Section 3.4, Statistical Model

Selection. If COSP is assumed to be Chaos, validate previous rejections of ABD1, ABD2,

ABD3, and ABD4.

Output – Evidence and Candidate Explanation(s)

Next – The next step depends upon the assumed COSP. If assumed COSP is Known, proceed to Section 3.2.6. If assumed COSP is Knowable, proceed to Section 3.2.7. If assumed COSP is Complex, proceed to Section 3.2.8. If assumed COSP is Chaos, proceed to Section 3.2.9.
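The branching among Steps 5 through 10 can be summarized as a simple control loop. The following Python sketch is illustrative only: the evidence-gathering and hypothesis-evaluation functions are hypothetical placeholders for the attribute selection and statistical tests of Sections 3.3 and 3.4, and the SEM lookup dictionary merely stands in for Table 2-11.

```python
# Illustrative sketch of the Step 5-10 branching described above; not the author's
# implementation. The helpers passed in are hypothetical placeholders.

# Hypothetical lookup standing in for Table 2-11 (COSP -> complexity-appropriate SEM).
SEM_LOOKUP = {
    "Known": "SEM for Known problems",
    "Knowable": "SEM for Knowable problems",
    "Complex": "SEM for Complex problems",
    "Chaos": "SEM for Chaotic problems",
}

ESCALATION = {"Known": "Knowable", "Knowable": "Complex", "Complex": "Chaos"}


def infer_cosp(gather_evidence, evaluate_hypothesis, assumed_cosp="Known"):
    """Loop through Steps 5-9: gather evidence for the assumed COSP, test the
    corresponding abductive hypothesis (ABD2..ABD5), and escalate on rejection."""
    while True:
        evidence = gather_evidence(assumed_cosp)          # Step 5
        if evaluate_hypothesis(assumed_cosp, evidence):   # Steps 6-9: fail to reject
            return assumed_cosp                           # inferred COSP
        if assumed_cosp == "Chaos":
            # Rejecting ABD5 means an earlier test was wrong; restart at Known.
            assumed_cosp = "Known"
        else:
            assumed_cosp = ESCALATION[assumed_cosp]


def recommend_sem(inferred_cosp):
    """Step 10: look up a complexity-appropriate SEM for the inferred COSP."""
    return SEM_LOOKUP[inferred_cosp]
```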

3.2.6 Step 6 – Evaluate Hypothesis ABD2 & Interpret if COSP is Known

Overview – As a second test, if necessary, SEMDAM looks for statistically significant evidence of a prior prediction of cause and effect using the proposed attribute(s) (see Section 3.3, Attribute Selection) and a statistical method appropriate to those attributes (see Section 3.4, Statistical Model Selection). Since Disorder has been rejected, the list of candidate explanations has been updated.

Input – Statistically significant Evidence of a priori cause and effect from Step 5; and, Candidate Explanations: Known, Knowable, Complex or Chaos.

Process – Using the evidence provided and updated candidate explanations, evaluate hypothesis ABD2.


ABD2: Given evidence (E => Evidence of a priori cause and effect)
and candidate explanations: Known, Knowable, Complex or Chaos of E,
infer the truth that Known best explains E. (4)

Output – If we fail to reject ABD2, infer that COSP is Known. Proceed to Step 10.

Next – If we reject ABD2, infer COSP is NOT Known. Assume COSP is Knowable and proceed to Section 3.2.5, Step 5 – Obtain Evidence for Cause and Effect Analysis.

3.2.7 Step 7 – Evaluate Hypothesis ABD3 & Interpret if COSP is Knowable

Overview – As a third test, if necessary, SEMDAM looks for statistically significant evidence of a prior prediction of cause and effect requiring special investigation or expert knowledge, using the proposed attribute(s) (see Section 3.3, Attribute Selection) and a statistical method appropriate to those attributes (see Section 3.4, Statistical Model Selection). Since Known has been rejected, the list of candidate explanations has been updated.

Input – Statistically significant Evidence of a priori cause and effect requiring special investigation or expert knowledge from Step 5; and, Candidate Explanations: Knowable,

Complex or Chaos.

Process – Using the evidence provided and updated candidate explanations, evaluate hypothesis ABD3.

ABD3: Given evidence (E => Evidence of a priori cause and effect that
requires expert knowledge or special investigation)
and candidate explanations: Knowable, Complex or Chaos of E,
infer the truth that Knowable best explains E. (5)


Output – If we fail to reject ABD3, infer that COSP is Knowable. Proceed to Step 10.

Next – If we reject ABD3, infer COSP is NOT Knowable. Assume COSP is Complex and proceed to Section 3.2.5, Step 5 – Obtain Evidence for Cause and Effect Analysis.

3.2.8 Step 8 – Evaluate Hypothesis ABD4 & Interpret if COSP is Complex

Overview – As a fourth test, if necessary, SEMDAM looks for statistically significant evidence of a posteriori perception of cause and effect using the proposed attribute(s) (see Section 3.3, Attribute Selection) and a statistical method appropriate to those attributes (see Section 3.4, Statistical Model Selection). Since Knowable has been rejected, the list of candidate explanations has been updated.

Input – Statistically significant Evidence of a posteriori cause and effect from Step 5; and, Candidate Explanations: Complex or Chaos.

Process – Using the evidence provided and updated candidate explanations, evaluate hypothesis ABD4.

ABD4: Given evidence (E => Evidence of a posteriori cause and effect)
and candidate explanations: Complex or Chaos of E,
infer the truth that Complex best explains E. (6)

Output – If we fail to reject ABD4, infer that COSP is Complex. Proceed to Step 10.

Next – If we reject ABD4, infer COSP is NOT Complex. Assume COSP is Chaos and proceed to Section 3.2.5, Step 5 – Obtain Evidence for Cause and Effect Analysis.


3.2.9 Step 9 – Evaluate Hypothesis ABD5 & Interpret if COSP is Chaos

Overview – As a fifth and final test, if necessary, SEMDAM evaluates the consequent categorization of COSP as Chaos. Since Complex has been rejected, the list of candidate explanations has been updated.

Input – Absence of statistically significant evidence of a priori or a posteriori cause and effect from previous tests; and, Candidate Explanation: Chaos.

Process – Verify previous rejection of hypothesis tests ABD1, ABD2, ABD3, and

ABD4. Using the lack of evidence provided and updated candidate explanation, evaluate hypothesis ABD5.

ABD5: Given evidence (E => No evidence of a priori or a posteriori cause and effect)
and candidate explanation: Chaos of E,
infer the truth that Chaos best explains E. (7)

Output – If we fail to reject ABD5, infer that COSP is Chaos. Because Chaos and Disorder are similar in that neither would provide evidence of a priori or a posteriori cause and effect, inferring Chaos (failure to reject ABD5) is dependent on the previous rejection of ABD1, which verified that COSP is not Disorder. Proceed to Section 3.2.10, Step 10 – Recommend SEM based on Inferred COSP.

Next – If we reject ABD5, infer COSP is NOT Chaos and realize that one of the previous hypothesis tests was incorrect. Assume COSP is Known and proceed to Section

3.2.5, Step 5 – Obtain Evidence for Cause and Effect Analysis.


3.2.10 Step 10 – Recommend SEM based on Inferred COSP

Overview – Applying the Cynefin sense-awareness framework with complexity domains associated with observation of cause and effect, SEMDAM provides a recommendation for a COSP appropriate SEM based on measuring statistically significant association for a priori prediction of system outcomes or a posteriori perception of system response.

Input – Inferred COSP.

Process – Using inferred COSP as input, look up the associated complexity appropriate SEM from Table 2-11.

Output – SEMDAM provides a recommendation of a complexity appropriate SEM based on the analysis of evidence gathered. SEMDAM is based on abduction, also referred to as inference to the best explanation, which is the style of reasoning commonly associated with medical doctors and detectives. Therefore, a complexity appropriate SEM is recommended subject to the understanding that “unlike deduction and induction, abduction can produce results that are incorrect within its formal system.” (New World

Encyclopedia, 2017)

Next – If SEMDAM was conducted as an aperiodic assessment or a periodic situational model to reassess the appropriateness of an in situ SEM, and the inferred COSP aligns with the in situ SEM, the SEMDAM artifacts, including the SEM recommendation, attributes used, and tests conducted, should be retained for potential future re-use.

If SEMDAM was conducted as an initial assessment and the inferred COSP was previously unknown or use of a SEM was not considered (i.e., Disorder), refer to Section


2.2, Definition of System Used, and begin the process of selecting and tailoring a SEM by defining the problem space.

If, however, SEMDAM was conducted as a periodic situational model to reassess the appropriateness of an in situ SEM and the inferred COSP did not align with the in situ SEM, the PM/SEM needs to reassess continued use of the in situ SEM to avoid system miscategorization and potentially system failure. Snowden and Boone wrote:

Good leadership requires openness to change on an individual level. Truly adept leaders will know not only how to identify the context they’re working in at any given time but also how to change their behavior and their decisions to match that context. They also prepare their organization to understand the different contexts and the conditions for transition between them.

In the complex environment of the current business world, leaders often will be called upon to act against their instincts. They will need to know when to share power and when to wield it alone, when to look to the wisdom of the group, and when to take their own counsel. A deep understanding of context, the ability to embrace complexity and paradox, and a willingness to flexibly change leadership style will be required for leaders who want to make things happen in a time of increasing uncertainty. (Snowden & Boone, A Leader's Framework for Decision Making, 2007)

Selection of a COSP appropriate SEM when the inferred COSP did not align with the in situ SEM does not automatically imply that a SEM capable of addressing more complexity is required. The activity of selecting attributes and analysis models is designed to increase knowledge of the COSP and the actual problem space. Not unlike a medical diagnosis, SEMDAM may first exclude several bad options before narrowing down the recommendation.

3.3 Attribute Selection Method

This section describes the requirements that SEMDAM places on attributes: valuable, nontrivial, and measurable; describes attribute data types: categorical and continuous; and describes attribute selection for both Design of Experiments (DoE) and observational studies.


3.3.1 Valuable, Nontrivial, and Measurable Data

SEMDAM is dependent on identification of attributes that may be used to predict or perceive patterns in system output based on valuable, nontrivial, and measurable data.

Regarding measurable data, Roedler and Jones wrote: “There are three key measurement concepts that form the basic building blocks for successful measurement application. They are:

Measurement is a consistent but flexible process that must be tailored to the unique information needs and characteristics of a particular project or organization. These information needs usually change during the life cycle as the environment changes, milestones are accomplished, performance parameters are achieved, risks are treated, etc. Changing information needs drive changes to the measures.

Decision makers must understand what is being measured. Key decision makers, including both technical and business managers, must be able to connect “What is Being Measured” to “What they need to know.” Measurement must deliver value- added objective results that can be trusted on the day-to-day issues that these managers face.

Measurement must be used to be effective. The measurement program must play a role in helping decision makers understand project and organization issues and to evaluate and make key trade-offs to optimize overall performance.

These three basic measurement concepts appear to be common sense, but are often ignored. They need to be ingrained in the project and organization to effectively apply measurement. (Roedler & Jones, 2005, p. 22)

3.3.2 Attribute Data Types

The selection of attributes for analysis impacts the type of statistical modelling available. Selecting attributes requires analysis of both inputs (X) and outputs (Y). Inputs and outputs may be either categorical or continuous. Gygi, Covey, DeCarlo, and Williams described the two data types – categorical and continuous – with descriptions and examples shown in Table 3-3. The type of data impacts selection of appropriate statistical analysis methods, which are described in Section 3.4.


Table 3-3: Characterization of Data Types

Category data – Data observations fall into discrete, named value categories; no mathematical operations can be performed on the raw data; you can count the number of occurrences you see of each category. Examples: eye color (brown, blue, green); location (Factory 1, Factory 2, Factory 3); inspection results (pass, fail); size (large, medium, small); fit check (go, no-go); questionnaire response (yes, no); attendance (present, absent); employee (Fred, Suzanne, Holly); processing (Treatment A, Treatment B).

Continuous data – Data observations can take on numerical value and aren’t confined to nominal categories. Examples: bank account balance (dollars); length (meters); time (seconds); electric current (amps).

(Gygi, Covey, DeCarlo, & Williams, 2012, p. 114)

3.3.3 Identification of Key Decision Attributes by Design of Experiment

An experiment purposely sets and controls input values by control and/or modification of the process or system being studied. Because variables are controlled in the design of experiments (DoE) and test runs can be randomly ordered, statistically significant results can lay claim to causation. Montgomery wrote:

In general, experiments are used to study the performance of processes and systems. The process or system can be represented by a general model of a process or system, Figure 3-10. We can usually visualize the process as a combination of operations, machines, methods, people, and other resources that transforms some input (often a material) into an output that has one or more observable response variables. Some of the process variables and material properties x1 through xp are controllable, whereas other variables z1 through zq are uncontrollable. (Montgomery, 2013, p. 3)

Figure 3-10: General model of a process or system (Montgomery, 2013, p. 3)

If DoE is possible, there are multiple DoE strategies available to identify significant attributes:

• Best-Guess Approach – This approach, which is frequently used in practice by scientists and engineers, involves selecting an arbitrary combination of factors and seeing what happens. Disadvantages of this approach include: it could take a long time or be impossible to schedule; and, there is no guarantee that the best solution has been achieved or ever will be achieved;

• One-Factor-at-a-Time (OFAT) Approach – This approach involves selecting a starting point, or baseline set of levels, for each factor, and then successively varying each factor over its range with the other factors held constant at the baseline level. The main disadvantage of this approach is that it fails to consider any possible interaction between the factors; and,

• Factorial or Fractional Factorial Approach – In this approach, factors are varied together instead of one at a time, which makes the most efficient use of the experiment data (a minimal sketch follows this list).
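As a minimal illustration of the contrast between the OFAT and factorial strategies, the sketch below enumerates a two-level full factorial design for three hypothetical factors; the factor names and levels are placeholders, not attributes from this research.

```python
# Minimal sketch of a two-level full factorial design versus an OFAT run plan.
from itertools import product

factors = {
    "factor_A": [-1, +1],   # low/high coded levels for hypothetical attributes
    "factor_B": [-1, +1],
    "factor_C": [-1, +1],
}

# Full factorial: every combination of levels is run, so interactions can be estimated.
full_factorial_runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(full_factorial_runs))   # 2^3 = 8 runs

# OFAT: a baseline plus one change at a time -- fewer runs here, but interactions are missed.
baseline = {name: levels[0] for name, levels in factors.items()}
ofat_runs = [baseline] + [{**baseline, name: levels[1]} for name, levels in factors.items()]
print(len(ofat_runs))             # 1 + 3 = 4 runs
```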

3.3.4 Identification of Key Decision Attributes by Observational Study

An observational study measures outputs, which can be response or dependent variables, of interest when it is impossible or impractical to control, manipulate, or influence the system under study, including its inputs or other factors. The analyst acts as an outside observer, recording data and/or events as they happen, or happened, in order to gain understanding from careful review of the environment. Finding statistically significant results during an observational study allows the claim of correlation and association, not causation, since observational studies don't or can't control any variables.

Bharathy and McShane wrote “a causal link is ascribed between two variables when the modeler believes that what happens in an independent variable was cause for some consequence in a dependent variable.” They wrote, “Systems dynamics is a causal modelling approach that evolved out of work on feedback control systems owing to systems dynamics’ ability to handle complex inter-relationships, nonlinearity, and feedback loop structures and time delays” adding, “a main tenet of causal modelling is that modelling each component individually and aggregating the components is not enough to determine the behaviour of a system.” (Bharathy & McShane, 2014)

Gygi, Covey, DeCarlo, and Williams wrote “reduction of a large collection of potential factors down to a smaller area of focus is called data mining” adding “be certain choices are guided by the data rather than by opinion or guesses.” (Gygi, Covey,

DeCarlo, & Williams, 2012, p. 209) Kantardzic wrote:


The need to understand large, complex, information-rich data sets is common to virtually all fields of business, science, and engineering. Data mining is an iterative process within which progress is defined by discovery, through either automatic or manual methods. Data mining is most useful in an exploratory analysis scenario in which there are no predetermined notions about what will constitute an “interesting” outcome. Data mining is the search for new, valuable, and nontrivial information in large volumes of data. It is a cooperative effort of humans and computers.

In practice, the two primary goals of data mining tend to be prediction and description. Prediction involves using some variables or fields in the data set to predict unknown or future values of other variables of interest. Description, on the other hand, focuses on finding patterns describing the data that can be interpreted by humans. System identification is not a one-pass process: both structure and parameter identification need to be done repeatedly until a satisfactory model is found. (Kantardzic, 2011, p. 2)

Gygi, Covey, DeCarlo, and Williams describe a progressive and iterative approach to identify critical system attributes that begins with observational studies and leads to DoE, shown in Figure 3-11. Regarding screening, characterization, and optimization experiments, Gygi, Covey, DeCarlo, and Williams wrote:

Screening experiments: The whole point of this stage is to quickly verify which factors have a significant effect on the output. When you start investigating a process or system, your experiments are designed to handle a large number of factors or variables because you identify all the possible Xs that may be influencing the output Y. But not all of those inputs affect the output, so screen them out.

Characterizing experiments: When you’ve screened out the unimportant variables, your experiments focus on characterizing and quantifying the effect of the remaining critical few inputs. These characterization experiments reveal what form and what magnitude the critical factors take in the Y = f(X) + ε equation for your process or system.

Optimization experiments: After characterizing your process or system, the final step is to conduct optimization experiments to find the best settings of the Xs to meet your Y goal. Your goal may be to maximize or minimize the value of the output or to hit a certain target level. More often, your goal is simply to minimize the amount of variation in the output Y. (Gygi, Covey, DeCarlo, & Williams, 2012, p. 265)


Figure 3-11: SEMDAM Attribute Selection is Based on Six Sigma’s Progressive and Iterative Approach to Identify Critical Outcomes (Gygi, Covey, DeCarlo, & Williams, 2012, p. 265) {Observational Study & DoE added}

3.3.5 Describing Trends in Attribute Data

SEMDAM is structured to work across a broad range of COSPs, including Complex or Chaos, where theory posits that it is not possible to optimize system response. Ackoff wrote of a useful concept to consider when optimization is not possible:

‘Satisficing’ is a remarkably useful term that was coined by Herbert A. Simon to designate efforts to attain some level of satisfaction, but not necessarily to exceed it. To satisfice is to do ‘well enough’, but not necessarily ‘as well as possible’. The level of attainment that defines ‘satisfaction’ is one that the decision maker is willing to settle for.

The satisficing approach to planning is usually defended with the hard-to-refute argument that it is better to produce a feasible plan that is not optimal than an optimal plan that is not feasible. (Ackoff, 1970)

3.4 Statistical Model Selection

The purpose of this section is to describe the statistical models used to evaluate the attributes selected above and to verify the prediction or perception as statistically significant, using the statistical methods shown in Figure 3-12.


Figure 3-12: Choosing an Analysis: Attribute Selection Impacts Statistical Method Selection (Minitab, 2010)

3.4.1 Linear or Logistic Regression

In statistical modeling, regression analysis is a statistical process for estimating the relationships among variables. It includes many techniques for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables (‘predictors’ or ‘factors’). Regression analysis helps one understand how the typical value of the dependent variable (or 'criterion variable') changes when any one of the independent variables is varied, while the other independent variables are held fixed. Simple linear regression is a statistical method that allows us to summarize and study relationships between two continuous (quantitative) variables:

• One variable, denoted x, is regarded as the predictor, explanatory, or independent variable; and,

• The other variable, denoted y, is regarded as the response, outcome, or dependent variable.


In a simple linear regression model, a single response measurement Y is related to a single predictor (covariate, regressor) X for each observation. The critical assumption of the model is that the conditional mean function is linear:

E(Y |X) = α + βX (8)

Simple linear regression gets its adjective "simple," because it concerns the study of only one predictor variable. In most problems, more than one predictor variable will be available. This leads to the following “multiple regression” mean function:

E(Y |X) = α + β1X1 + · · · + βpXp (9)

where α is called the intercept and the βj are called slopes or coefficients.
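As an illustration of the regression models above, the following sketch fits a simple linear regression and a two-predictor least-squares model to fabricated data; the variables are placeholders for candidate attributes and a response, not data from this research.

```python
import numpy as np
from scipy import stats

# Fabricated placeholder data for two predictors (x1, x2) and a response (y).
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
x2 = np.array([0.5, 1.1, 0.9, 1.8, 2.2, 2.9])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])

# Simple linear regression, E(Y|X) = alpha + beta*X, with a p-value on the slope.
fit = stats.linregress(x1, y)
print(fit.intercept, fit.slope, fit.pvalue)

# Multiple regression, E(Y|X) = alpha + beta1*X1 + beta2*X2, via ordinary least squares.
X = np.column_stack([np.ones_like(x1), x1, x2])   # design matrix with an intercept column
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coeffs)                                     # [alpha, beta1, beta2]
```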

3.4.2 Analysis of Variance (ANOVA)

Analysis of Variance (ANOVA) is a collection of statistical models used to analyze the differences among group means and their associated procedures (such as "variation" among and between groups). The analysis of variance may be used as an exploratory tool to explain observations. ANOVA provides a statistical test of whether or not the means of several groups are equal, and therefore generalizes the t-test to more than two groups.

ANOVA is useful for comparing (testing) three or more means (groups or variables) for statistical significance. It is conceptually similar to performing multiple two-sample t-tests.

Nordstokke and Zumbo wrote “Tests of the equality of variances are sometimes used on their own to compare variability across groups of experimental or non-experimental conditions but they are most often used alongside other methods to support assumptions made about variances.” (Nordstokke & Zumbo, 2010)
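A minimal sketch of a one-way ANOVA on three hypothetical groups, using a standard library routine; the data are fabricated for illustration.

```python
from scipy import stats

group_1 = [4.1, 3.9, 4.3, 4.0, 4.2]
group_2 = [4.8, 5.1, 4.9, 5.0, 5.2]
group_3 = [4.0, 4.2, 4.1, 3.8, 4.1]

# One-way ANOVA: tests whether the group means are all equal.
f_stat, p_value = stats.f_oneway(group_1, group_2, group_3)
print(f_stat, p_value)   # a small p-value suggests the group means are not all equal
```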


3.4.3 Two-Sample t-Test

A two-sample t-test determines if the population means for two independent groups differ significantly or if the difference is due instead to random chance. Minitab wrote:

Use a two-sample t-test with continuous data from two independent random samples. Samples are independent if observations from one sample are not related to the observations from the other sample. The test also assumes that the data come from normally distributed populations. However, it is fairly robust to violations of this assumption when the size of both samples is 30 or more. (Minitab, 2010)
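A minimal sketch of the two-sample t-test on two hypothetical independent samples; the data are fabricated for illustration.

```python
from scipy import stats

sample_a = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3]
sample_b = [11.0, 10.8, 11.2, 10.9, 11.1, 10.7]

# Two-sample t-test on independent samples (equal variances assumed by default).
t_stat, p_value = stats.ttest_ind(sample_a, sample_b)
print(t_stat, p_value)   # a small p-value suggests the population means differ
```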

3.4.4 Data Qualification Methods

The following data qualification analysis steps – test for normality, Johnson Transformation, Grubbs’ test for outliers, test for equal variances, and/or Chi-Squared test for independence – may be used to provide a statistically significant basis for rejecting or failing to reject hypotheses.

3.4.4.1 Test for Normal Distribution

The Anderson-Darling test for normality is based on the following null and alternate hypotheses:

H(N): Data follows a normal distribution

H(N)A: Data does not follow a normal distribution

The Anderson-Darling test statistic, A², is defined as:

A² = −n − S (10)

Where:

S = Σ (from i = 1 to n) [(2i − 1)/n] × [ln F(Yi) + ln(1 − F(Yn+1−i))]

n = sample size

F = cumulative distribution function (CDF) of the normal distribution

Yi = ordered observations
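A normality check of this kind can be run with a standard library routine; the sketch below uses fabricated data and compares the A² statistic to the tabulated critical values.

```python
import numpy as np
from scipy import stats

# Fabricated, approximately normal data for illustration.
data = np.random.default_rng(1).normal(loc=10.0, scale=2.0, size=40)

result = stats.anderson(data, dist='norm')
print(result.statistic)          # A^2
print(result.critical_values)    # critical values at result.significance_level percentages
# Reject H(N) if the statistic exceeds the critical value at the chosen significance level.
```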

3.4.4.2 Johnson Transformation

The Johnson Transformation test statistic for lognormal (SL-family) distributions, Z, is defined as:

Z = γ + η ln[(X − ε)/λ], X > ε (11)

Where:

Z = standard normal random variable

γ = shape parameter

η = shape parameter

λ = scale parameter

ε = location parameter
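A minimal sketch of applying the transformation above with assumed parameter values; in practice the parameters are estimated from the data (e.g., by Minitab's Johnson Transformation procedure).

```python
import numpy as np

# Hypothetical parameter values: gamma/eta (shape), lam (scale), eps (location).
gamma, eta, lam, eps = 0.5, 1.2, 1.0, 3.0
x = np.array([3.5, 4.0, 5.5, 8.0, 12.0])     # requires X > eps

# Transformed values, approximately standard normal if the parameters fit the data.
z = gamma + eta * np.log((x - eps) / lam)
print(z)
```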

3.4.4.3 Grubbs’ Test for Outliers

The Grubbs’ test identifies data points that are either much smaller or much larger than the dataset norm. Identification and removal of outlying data points is necessary to ensure the accuracy of statistical testing and avoid incorrect conclusions. Because the Grubbs’ test detects only one outlier at a time, detection of an outlier requires that the outlier be removed from the data set and the test statistic rerun until no further outliers are identified. The Grubbs’ test for outliers is based on the following null and alternate hypotheses:

H(G): There are no outliers in the data set

H(G)A: There is at least one outlier in the data set

The Grubbs’ test statistic is defined as:

G = max (i = 1, …, N) |Yi − Ȳ| / s (12)

Where:

N = sample size

Ȳ = sample mean

Yi = ordered observations

s = sample standard deviation
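Since common statistics libraries do not ship a Grubbs' routine, the sketch below computes the statistic above directly and compares it to the standard t-based critical value; the data are fabricated, and only the single-pass version is shown (iterative removal is omitted).

```python
import numpy as np
from scipy import stats

def grubbs_statistic(y):
    """G = max|Yi - mean| / s for a single pass of the Grubbs' test."""
    y = np.asarray(y, dtype=float)
    return np.max(np.abs(y - y.mean())) / y.std(ddof=1)

def grubbs_critical(n, alpha=0.05):
    """Standard two-sided critical value based on the t distribution."""
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    return ((n - 1) / np.sqrt(n)) * np.sqrt(t**2 / (n - 2 + t**2))

data = [9.8, 10.1, 10.0, 9.9, 10.2, 14.7]   # last point is a suspected outlier
g = grubbs_statistic(data)
print(g, grubbs_critical(len(data)))         # reject H(G) if g exceeds the critical value
```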

3.4.4.4 Test for Equal Variances

Homogeneity of variance is a standard assumption of ANOVA and most statistical tests. The F-test is used to verify homogeneity/equality of variance thus allowing use of

ANOVA or other statistical tests with confidence. The F-test for equality of two variances is based on the following null and alternate hypotheses:

H(EV): σ1² = σ2² (variances are equal)

H(EV)A: σ1² ≠ σ2² (variances are not equal)

The F-test test statistic, F, is defined as:

F = s1² / s2² (13)

Where:

s1² = sample variance for sample one

s2² = sample variance for sample two
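A minimal sketch of the F-test above on two fabricated samples; the two-sided p-value uses the F distribution with (n1 − 1, n2 − 1) degrees of freedom.

```python
import numpy as np
from scipy import stats

sample_1 = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.3])
sample_2 = np.array([11.0, 10.2, 11.9, 10.1, 12.0, 9.5])

# F statistic is the ratio of the two sample variances.
f = sample_1.var(ddof=1) / sample_2.var(ddof=1)
df1, df2 = len(sample_1) - 1, len(sample_2) - 1
p_value = 2 * min(stats.f.cdf(f, df1, df2), stats.f.sf(f, df1, df2))   # two-sided
print(f, p_value)   # a small p-value suggests rejecting H(EV) that the variances are equal
```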

3.4.4.5 Chi-Squared Test for Independence

The chi-squared test is used to determine whether there is a significant difference between the expected frequencies and the observed frequencies in one or more categories. A chi-squared test, also written as χ2, is any statistical hypothesis test wherein the sampling distribution of the test statistic is a chi-squared distribution when the null hypothesis is true. Test statistics that follow a chi-squared distribution arise from an assumption of independent normally distributed data, which is valid in many cases due to


the central limit theorem. A chi-squared test can be used to attempt rejection of the null hypothesis that the data are independent.
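A minimal sketch of a chi-squared test of independence on a hypothetical 2×2 contingency table of categorical attribute counts:

```python
from scipy import stats

# Hypothetical contingency table of observed counts (rows: treatments, columns: outcomes).
observed = [[30, 10],    # e.g., Treatment A: pass, fail
            [20, 25]]    # e.g., Treatment B: pass, fail

chi2, p_value, dof, expected = stats.chi2_contingency(observed)
print(chi2, p_value, dof)   # a small p-value suggests rejecting the hypothesis of independence
```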

3.5 Chapter Summary

Section 3.1, Research Design, presented the research context and methodological formulation, discussed alternatives considered, and presented plans for validation. Section 3.2 introduced SEMDAM, providing an overview, describing proposed use, and detailing the activities and decisions within SEMDAM. Section 3.3, Attribute Selection, provided insight into the selection of attributes via screening, characterization, and optimization experiments, including the interdependencies with Section 3.4, Statistical Model Selection, which described available statistical methods and associated data qualification methods.


4 SEMDAM Applied to an Empirical Case Study

If it can’t be measured, it isn’t science. Lord Kelvin

Checkland wrote “learning from books or lectures is relatively easy … but learning from experience is difficult for everyone.” (Checkland, 2000) White, Gandhi, Gorod,

Ireland, and Sauser wrote “case studies are important and a valuable means for understanding what works and doesn’t work in addressing the most difficult systems engineering problems” adding “they inform burgeoning theories such as those associated with complex systems engineering.” (White, Gandhi, Gorod, Ireland, & Sauser, 2013)

To demonstrate the feasibility of SEMDAM, this section uses the case study method to analyze the COSP as a basis for recommending a complexity appropriate SEM for the US National HealthCare (USNHC). While SEMDAM supports both aperiodic assessment and periodic situational awareness modeling, this case study presents an aperiodic assessment. Since SEMDAM has both predictive and retrospective elements, passage of time is simulated by evaluating current abductive logic statements as though they were conducted previously.

Section 4.1, USNHC – Empirical Case Study Overview, presents background on the judicial and legislative precedents that have shaped USNHC and introduces the evidence required for Section 4.2, Applying SEMDAM to USNHC Case Study. Section 4.3, Summation of USNHC Case Study, contains conclusions and observations of USNHC and a summary of the potential impacts of HIPAA legislation, specifically breach notification, on USNHC gained while performing this case study. Specific observations or considerations of the systems engineering aspects of SEMDAM are presented in Chapter 5, Synthesis & Discussion.


4.1 USNHC – Empirical Case Study Overview

Much of the existing SE literature that describes USNHC is focused on design, development, and integration of new systems while much of the operational research focuses on compliance with an evolving requirements landscape. This section presents historic and modern requirements for privacy and confidentiality of medical information providing context for understanding the multifaceted landscape of self-imposed and directed executive, legislative, and judicial requirements that shape USNHC.

In the 5th century BC, the Hippocratic Oath defined the foundation for medical privacy for USNHC, “Whatever, in the course of my practice, I may see or hear (even when not invited), whatever I may happen to obtain knowledge of, if it be not proper to repeat it, I will keep sacred and secret within my own breast.” (Smith J. G., 1825) The oath remains relevant nearly 2,500 years later as nearly 100 percent of graduating medical school students still swear to uphold a modern version of the oath. (Tyson, 2001)

In 1603 Semayne’s case established the “knock and announce” rule which afforded physical protection to individuals and their immediate surroundings. Also called the

Castle doctrine, it is summarized as “Every man’s house is his castle.” (Semanye's Case,

1604) This thread of legal protection for individuals and their effects was codified in U.S. law in the U.S. Constitution’s 4th Amendment, ratified in 1791, which states “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated.” (U.S. Bill of Rights, 1791) In an 1890 Harvard Law Review article, Warren and Brandeis argued that the definition of “right of life” originally provisioned in the Castle Doctrine to protect the individual from physical harm should be broadened in scope to recognize a man’s spiritual nature including


feelings and his intellect. (Warren & Brandeis, 1890) The legal summary is regarded as the precedent for the definition of and expectation of privacy often summarized by and cited as “the right to be let alone.”

The legal framework for healthcare in the U.S. is derived from workers’ compensation legislation that came from Europe in the early 1900s, which is a reason U.S. healthcare, prior to the ACA, was available from an employer. (Hadler, 2016) U.S. adoption of workers’ compensation legislation was based on the commerce clause in 1908, when the U.S. Supreme Court ruled it could not be a national scheme, with the exception of Medicare, writing “Congress has power to regulate the relation of master {employer} and servant {employee} to the extent that such regulations are confined solely to interstate commerce” adding “one engaging in interstate commerce does not thereby submit all his business to the regulating power of Congress.” (The Employers' Liability Cases, 1908)

When health insurance came about in the United States before World War II, beginning at Baylor Hospital, it followed established legal precedent and became state-based as well. (Hadler, 2016)

Modern federal data protection law is based on “fair information practice principles”

(FIPPs). (Cate, 2006) FIPPs reflected a wide consensus about the need for broad standards to facilitate both individual privacy and the promise of information flows in an increasingly technology-dependent, global society. The core FIPPs principles of notice/awareness, choice/consent, access/participation, integrity/security, and enforcement/redress formed the basis for the Privacy Act, which Congress adopted in 1974. (93rd U.S. Congress, 1974)


The Health Insurance Portability and Accountability Act (HIPAA), Public Law 104-

191, enacted on August 21, 1996 included Sec. 264, Recommendations with Respect to

Privacy of Certain Health Information, that directed the Secretary of Health and Human

Services (HHS) to develop recommendations for the rights an individual should have, procedures to exercise those rights once established and uses and disclosures that should be authorized or required. (104th U.S. Congress, 1996) Sections 261 through 264 of

HIPAA required the Secretary of HHS to publicize standards for the electronic exchange, privacy and security of health information. Collectively known as the Administrative

Simplification provisions, they required the HHS Secretary to issue privacy regulations governing individually identifiable health information if Congress did not enact privacy legislation within three years of the passage of HIPAA. Congress did not act within the time limit, thus forcing HHS to develop both Privacy and Security Rules via the Notice of Proposed Rule Making process published in the Federal Register. HHS continued to evolve requirements and released a proposed rule for public comment on November 3, 1999

(HHS/ASPE, 1999) which resulted in the Privacy Rule in 2000 (HHS/ASPE, 2000). Later updates include the Security Rule in 2003 and a set of significant modifications in 2013.

(HHS/OCR, 2013; HHS/CMS, 2003)

Moskop et al. wrote “most discussions of limiting access to patient information refer to ‘confidentiality’ not ‘privacy’” citing HIPAA as a notable exception because “the rule consistently refers to the ‘privacy’ of health care information and only infrequently uses the term ‘confidentiality’.” (Moskop, Marco, Larkin, Gelderman, & Derse, 2005)

President Bush signed Executive Order 13335 in April 2004 “for the development and nationwide implementation of an interoperable health information technology


infrastructure to improve the quality and efficiency of health care” which created the

Office of the National Health Information Technology Coordinator with the mandate to

“develop, maintain, and direct the implementation of a strategic plan to guide the nationwide implementation of interoperable health information technology in both the public and private health care sectors that will reduce medical errors, improve quality, and produce greater value for health care expenditures.” (Bush, 2004)

The American Recovery and Reinvestment Act (ARRA), Public Law 111-5, dated 17 Feb

2009, directed the Secretary of Health and Human Services (HHS) to “describe the specific actions that have been taken by the Federal Government and private entities to facilitate the adoption of a nationwide system for the electronic use and exchange of health information” further directing HHS to “invest in the infrastructure necessary to allow for and promote the electronic exchange and use of health information for each individual in the United States.” (111th U.S. Congress, 2009)

Health Information Technology for Economic and Clinical Health Act (HITECH), passed by Congress in 2009 as part of the ARRA, included Subpart A which promoted

Health Information Technology (HIT) by establishing the Office of the National

Coordinator for Health Information Technology (ONC), set strategic goals for availability of electronic health records (by 2014 for every person in the US), and provided governance for a nationwide health information network. Subpart C, Grants and Loans Funding, defined a set of state and federal grants to promote adoption and use of health information technology infrastructure and certified EHR technology described in Sections 4.1.4.1, Federal Incentive Payments, and 4.1.4.2, Federal Grants to States.

The HITECH Act included new PHI breach notification requirements for private sector


covered entities, their business associates, and vendors of personal health records affecting more than 500 individuals. (111th U.S. Congress, 2009)

The Affordable Care Act (ACA) of 2010, frequently referred to as ‘Obamacare’, is the collective term for the Patient Protection and Affordable Care Act (PPACA) (Pub. L.

111–148), dated 23 Mar 2010, and the Health Care and Education Reconciliation Act

(HCERA). (Pub. L. 111–152) PPACA established a health insurance exchange (HIE) in each state and DC where individuals and small businesses could “shop” for health insurance coverage. (111th U.S. Congress, 2010) PPACA instructed each state to establish its own state-based exchange (SBE). However, if a state elected not to create an exchange or if the Secretary of Health and Human Services (HHS) determined a state was not prepared to operate an exchange, the law directed HHS to establish a federally facilitated exchange (FFE) in the state. According to federal regulations, state-based marketplaces are responsible for protecting and ensuring the confidentiality, integrity, and availability of marketplace enrollment information, and must also establish and implement certain privacy and security standards. (GAO, 2016)

Regardless of whether a state established and operated its own marketplace or used the federally facilitated marketplace, PPACA and HHS regulations and guidance require every marketplace to have capabilities that enable them to carry out four key functions shown in Figure 4-1: Functions Performed by the Various Types of Marketplaces. (GAO,

2016)


Figure 4-1: Functions Performed by the Various Types of Marketplaces (GAO, 2016) GAO analysis of Centers for Medicare & Medicaid Services data.

From 2009 through 2014, HHS/CMS led a $500 million development effort to build the FFE that involved 60 contractors to provide HealthCare.gov, the frontend portal, and backend systems called the Federal Marketplace or Federal Data Services Hub (HHS/OIG, 2014), as shown in Figure 4-2: Healthcare.gov and Its Supporting Systems.


Figure 4-2: Healthcare.gov and Its Supporting Systems (HHS/OIG, 2014, p. 6) cites GAO analysis of Centers for Medicare & Medicaid Services

Healthcare.gov identifies the public facing portal while the Federal Data Services

Hub (FDSH) is a CMS system for exchanging information between Healthcare.gov and

CMS’s external partners, including other federal agencies, state-based marketplaces, other state agencies, other CMS systems, and issuers of Qualified Health Plans (QHPs).

(HHS/OIG, 2014) Despite providing an obvious service (data analysis and eligibility) for qualified health plans (QHPs), HHS/ONC ruled that exchanges were not acting on behalf of QHPs and were not HIPAA business associates subject to HIPAA’s Security, Privacy, and Breach Notifications requirements. (HHS, 2012, pp. 18,325) Therefore, none of the

316 incidents included in a GAO report (GAO, 2016) which involved PII and attempts by attackers to compromise part of the Healthcare.gov system between October 6, 2013 and

March 8, 2015 are included in the HHS/OCR Breach Portal. (HHS/OCR, 2016) GAO reported “Numerous significant security weaknesses have been identified in state-based marketplaces” adding “CMS officials do not require comprehensive annual testing or continuous monitoring of security controls.” (GAO, 2016)


Concurrent with states developing HIEs and HHS/CMS developing the FFE, significant political debate arose about the legality of a federal mandate for U.S. federal healthcare, which crystallized around the fact that the FFE was not “in the state.” This issue led to multiple lawsuits, eventually reaching the U.S. Supreme Court in KING v. BURWELL. In June 2015, the Court found for BURWELL {i.e., USNHC was legal} writing “The upshot of all this is that the phrase ‘an Exchange established by the State under [42 U. S. C. §18031]’ is properly viewed as ambiguous” noting that “The Affordable Care Act contains more than a few examples of inartful drafting.” (KING ET AL. v. BURWELL, SECRETARY

OF HEALTH AND HUMAN SERVICES, ET AL., 2014)

4.1.1 Evidence of SE&M or other Assessments

Classification of USNHC as an SoS exists in multiple scholarly articles. Grigoroudis and Phillis defined a system-of-systems approach to modelling healthcare systems for an entire country using a four-level SoS hierarchy: family care, first-level healthcare, second-level healthcare, and third-level healthcare when they wrote:

We presented a new methodology for modeling healthcare systems and devising strategies that improve the health level of a population. The proposed methodology was based on a SoS approach, which allows modeling the hierarchical structure of national healthcare systems, and provides analytical results for the component systems through simulations at each SoS level. (Grigoroudis & Phillis, 2013)

Wickramasinghe, Chalasani, Boppana, and Madni described USNHC using Sage and

Cuppan’s definition of SoS as a basis for definition of a Healthcare System of Systems

(HSoS):

Healthcare systems are very diverse, distributed, and complex systems in nature. A healthcare system of systems (HSoS) can be defined as a collection of independent, large scale complex, distributed systems. HSoS exhibit operational and managerial independence, geographic distribution, and evolutionary development. An SoS perspective is imperative in order to realize network centric healthcare operations.


As both US and Europe move forward to incorporate e-health and electronic medical records or computerized patient records, the concept of healthcare systems of systems becomes more important if we are to fully realize the benefit of Information Communication Technology (ICT) use in healthcare. (Wickramasinghe, Chalasani, Boppana, & Madni, 2007)

De Laurentis, Dickerson, DiMario, Gartz, Jamshidi, Nahavandi, Sage, Sloane, and

Walker, wrote of the appropriateness of considering USNHC a system-of-systems:

The U.S. Secretary of Health has initiated the development of a National Healthcare Information Network (NHIN), with the goal of creating a nationwide information system that can build and maintain electronic health records (EHRs) for all citizens by 2014.

The NHIN will rely on a network of independent Regional Healthcare Information Organizations (RHIOs) that are being developed and deployed to transform and communicate data from the hundreds of thousands of legacy medical information systems presently used in hospital departments, physician offices, and telemedicine sites into NHIN-specified metaformats that can be securely relayed and reliably interpreted anywhere in the country.

The NHIN “network of networks” will clearly be a very complex SoS, and the performance of the NHIN and RHIOs will directly affect the safety, efficacy, and efficiency of healthcare in the U.S. (De Laurentis, et al., 2007)

4.1.2 Stakeholder Needs

This section documents the literature research results that identify stakeholder needs for a USNHC. The two primary needs frequently discussed in the literature were to improve quality and to reduce costs. For example, MITRE wrote “the twin goals of improved health care and lowered health care costs will be realized only if health related data can be explored and exploited in the public interest, for both clinical practice and biomedical research. That will require implementing technical solutions that both protect privacy and enable data integration across patients.” (JASON - The MITRE Corporation, 2014)

Donabedian provided the definition of improve quality used in this research when he defined quality of medical care as “The outcome of medical care, in terms of recovery, restoration of function, and of survival.” (Donabedian, 2005)


4.1.3 Statement of the Problem or Opportunity

Within USNHC, implementation of Health Information Technology (HIT) includes technical solutions for healthcare professionals deploying Electronic Health Records (EHR) – also called Electronic Medical Records (EMR) – or states implementing exchange of health information by deploying Health Information Exchanges (HIE) or SBEs. ANSI wrote “when the HIPAA Privacy statute was first enacted in 1996, most health information was on paper” adding “the use of electronic medical records (EMRs) among office-based physicians in the U.S. stood at 29.2% with only 12.4% of physicians using minimally functional EMR systems.” (ANSI, 2012)

In 2009, the Department of Health and Human Services, through the Centers for

Medicare & Medicaid Services, initiated a federal program to fund hospitals and healthcare providers for deployment of health information technology (HIT) using a set of criteria called Meaningful Use. As of 2014, more than 403,000 healthcare professionals, approximately 75 percent of eligible professionals, have received incentive payments to adopt and meaningfully use certified EHR technology while over 4,500 hospitals, which is 92% of eligible institutions, have received similar incentive payments.

(HHS/ONC, 2014) Federal investments in EHRs were distributed to all 50 states and DC.

Under PPACA, Congress provided federal investment to states for the purpose of standing up SBEs. (111th U.S. Congress, 2010)

HHS views privacy to be an overriding factor in that it impacts all other intended benefits since “the entire health care system is built upon the willingness of individuals to share the most intimate details of their lives with their health care providers.”

(HHS/ASPE, 2000, p. 82467) Agaku, Adisa, Ayo-Yusuf, and Connolly wrote that patient


concern about privacy and security of medical information may be reducing the quality of health care provided as patients are unwilling to share information with physicians for fear it will not be kept private. (Agaku, Adisa, Ayo-Yusuf, & Connolly, 2014)

Barrows and Clayton wrote “one purpose of EMRs is to increase the accessibility and sharing of health records among authorized individuals” adding “when electronic records are used for research, valid epidemiological studies may be conducted using aggregates of nonidentifiable patient data.” (Barrows, Jr. & Clayton, 1996)

4.1.4 Sources of Data (Attributes)

The following sections describe the sources of data used in this research, including an overview identifying source(s), data description, data manipulation (if required), and data qualification (if required) steps for each of the attributes used in Section 4.2, Applying SEMDAM to USNHC Case Study. Research into attributes began in 2013 and was completed in 2015. The analysis and conclusions contained in this dissertation represent a point-in-time analysis of the HHS/OCR Breach Portal covering 2009 through 2014, with the goal of providing a valuable systems engineering perspective on USNHC.

4.1.4.1 Federal Incentive Payments to Unique Providers

Overview – The federal government provided incentive payments to 412,962 unique medical providers between May 2011 and December 2014 to promote the adoption and use of EHRs. Federal incentive payments to unique providers totaled: $18.8 billion in

Medicare EHR Incentive Program payments and $8.9 billion in Medicaid EHR Incentive

Program payments. (HHS/CMS, 2014) Jung, Unruh, Kaushal, and Vest wrote:

To receive payments through the Medicare incentive program, eligible professionals must demonstrate that they are making “meaningful use” of EHRs. The Medicaid incentive program differs from the Medicare program by requiring eligible professionals to meet minimum Medicaid patient volume thresholds or to practice in a


federally qualified health center or rural health clinic. (Jung, Unruh, Kaushal, & Vest, 2015)

Source Data Description – An Excel table containing dollar amounts for federal incentive payments to unique providers via Medicare and Medicaid for eligible providers (EP) and hospitals, including a total investment per state.

Data Manipulation – Not required.

4.1.4.2 Federal Grants to States

Overview – The 2014 Congressional Research Service Report on Federal Funding for

Health Insurance Exchanges (HIEs) provided a summary of HIE grants by state and DC, including Early Innovator, Planning, Level I, and Level II grants, for a total of $4.9 billion to all 50 states and the District of Columbia (DC) to establish SBEs. (Mach & Redhead, 2014) The report indicated the planned type of exchange for each state as either an SBE or use of the federally facilitated exchange (FFE).

Source Data Description – A document in portable document format (pdf).

Data Manipulation – The state totals provided were transcribed into Excel. The threshold of $89 million as a substantial investment per state was derived by analyzing grants to 14 states and DC that successfully deployed state exchanges (California, Colorado, Connecticut, Hawaii, Kentucky, Maryland, Massachusetts, Minnesota, Nevada, New York, Oregon, Rhode Island, Vermont, Washington state, and DC) and two states that accepted substantial federal investment but were eventually unsuccessful in attempts to deploy a state exchange (Illinois and New Mexico).

(Gilbertson, Tanju, & Eveleigh, 2017)


4.1.4.3 State Populations

Overview – The U.S. Census Bureau provides annual estimates of the population for the United States, regions, states, and Puerto Rico, used in this research to calculate per capita measurements for other attributes, allowing comparison between samples. (Yax,

2015)

Source Data Description – An Excel table of the estimated resident population as of July 1 for the years 2010, 2011, 2012, 2013, and 2014.

Data Manipulation – Not required.

4.1.4.4 All Reported Breaches of PHI Data

Overview – A ‘breach’ of protected health information (PHI) is defined in the

HITECH act as “the unauthorized acquisition, access, use, or disclosure of PHI which compromises the security or privacy of such information, except where an unauthorized person to whom such information is disclosed would not reasonably have been able to retain such information.” (HHS, 2009) Once a breach of 500 individuals or more is discovered, the entity responsible for the breach is required to provide notice to prominent media outlets and the Secretary of Health and Human Services (HHS). (HHS,

2009) Responsible entities include: Healthcare Providers (HCP), Health Plans,

Healthcare Clearing Houses, or Business Associates.

The Office of Civil Rights (OCR) within the Department of Health & Human

Services (HHS) administers and enforces the HIPAA Privacy, Security, and Breach

Notification Rules which include requirements for timeliness, method, and content of notification of all breaches of protected health information (PHI) affecting more than 500 individuals. (HHS/ONC, 2015)


Source Data Description – During this research, the HHS/OCR Breach Portal exhibited substantial changes and modifications such that a single source data description is not possible. The current HHS/OCR breach portal is available at https://ocrportal.hhs.gov/ocr/breach/breach_report.jsf. (HHS/OCR, 2016)

Data Manipulation – During this research, the HHS/OCR Breach Portal exhibited substantial administrative, technical, and business changes. Administrative changes occur when the reporting entity modifies one or more attributes of a single, previously reported breach notice. Technical changes occur when HHS/OCR modifies the reporting template, the display of reported breaches including changes to the number and type of reported attributes, modifications to underlying enumerated data types and migrations in underlying data storage technology. Business changes are either statutory, regulatory, or judicial and occur when the President or one of the Executive Departments (e.g., DoJ,

HHS, FTC, DoD), Congress, or the Supreme Court modifies one or more of the Privacy,

Security, or Breach Notification requirements.

Repeated observations of the HHS/OCR Breach Portal over extended periods of time demonstrated that in order to obtain a comprehensive list of PHI breaches a longitudinal study was required. Details of that longitudinal study are included in Appendix A –

COBPS. The output from the longitudinal study is a consolidated OCR breach portal summary (COBPS) data set of PHI breaches that contains the following reported or derived data elements per reported PHI Breach:

• Name of Responsible Entity;

• Type of Responsible Entity – defined as Healthcare Providers (HCP), Health Plans, Healthcare Clearing Houses, or Business Associates;

• State of Responsible Entity;

• Number of Individuals Affected;

• Date of Breach;

• Kind of Breach – defined as either Electronic or Physical; and,

• Nature of Breach – defined as either Malicious or Negligent.

COBPS, implemented in Excel, allows for sorting and filtering of the breaches and provides extracts as required. HHS/OCR introduced a substantial change to the mission or business requirements in January 2013 with the additional requirement that business associates be subject to HIPAA reporting. (HHS/OCR, 2013) To accurately allocate reported PHI breaches to the actual responsible state, this change required analysis of each reported PHI breach to determine if it occurred before or after 23 September 2013 and, if a business associate was involved, what state and state laws were in effect for that business associate, allowing for the following two breach summaries:

Sum of All Breaches per state = AB(State) (14)

Where:

AB(State) = Σ (over y = 2009 to 2014) Σ (over breaches i attributed to the state) Breach(i, y); and,

Sum of HCP Breaches per state = HCPB(State) (15)

Where:

HCPB(State) = Σ (over y = 2009 to 2014) Σ (over HCP breaches i attributed to the state) HCP Breach(i, y)

Where:

n = 51 (50 states & DC)

y = 2009 through and including 2014

A per capita count of reported PHI breaches per state was calculated for All Breaches and for HCP Breaches using the state populations as follows:

AB/M = AB(State) / State Population × 1,000,000 (16)

HCPB/M = HCPB(State) / State Population × 1,000,000 (17)
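A minimal sketch of the aggregation and normalization in equations (14) through (17), applied to a hypothetical extract of the COBPS data set; the column names and values are assumptions for illustration, not the actual data.

```python
import pandas as pd

# Hypothetical COBPS extract: one row per reported PHI breach (column names assumed).
breaches = pd.DataFrame({
    "state": ["CA", "CA", "TX", "TX", "TX"],
    "entity_type": ["HCP", "Health Plan", "HCP", "HCP", "Business Associate"],
    "year": [2010, 2012, 2011, 2013, 2014],
})
# Hypothetical state populations used for normalization.
population = pd.Series({"CA": 38_800_000, "TX": 26_900_000}, name="population")

ab_state = breaches.groupby("state").size()                                       # AB(State), eq. (14)
hcpb_state = breaches[breaches["entity_type"] == "HCP"].groupby("state").size()   # HCPB(State), eq. (15)

ab_per_million = ab_state / population * 1_000_000       # AB/M, eq. (16)
hcpb_per_million = hcpb_state / population * 1_000_000   # HCPB/M, eq. (17)
print(ab_per_million, hcpb_per_million)
```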

4.1.4.5 All Reported Breaches of PHI Data by State – AB(State)

Overview – COBPS contains All Reported Breaches of PHI Data by State.

Source Data Description – See Section 4.1.4.4.

Data Manipulation – Not required.

4.1.4.6 AB(State) normalized by State Population – (AB/M)

Overview – AB/M provides a single number per state or DC for the number of all reported breaches from 2009 through 2014 divided by the state population as of June

2014. AB/M supports statistically significant comparisons between various combinations of attributes of interest aggregated by state.

Source Data Description – List of values ranging from 0.70 to 14.96.

Data Manipulation – AB/M was derived by dividing AB(State), Section 4.1.4.5, by

State Population, Section 4.1.4.3.

4.1.4.7 HCPB(State) – HCP Reported Breaches by State

Overview – COBPS contains HCP Reported Breaches of PHI Data by State.

Source Data Description – See Section 4.1.4.4.

Data Manipulation – Not required.


4.1.4.8 HCPB/M – HCPB(State) normalized by State Populations

Overview – HCPB/M provides a single number per state or DC for the number of reported breaches by HCPs from 2009 through 2014 divided by the state population as of

June 2014. HCPB/M supports statistically significant comparisons between various combinations of attributes of interest aggregated by state.

Source Data Description – List of values ranging from 0.61 to 4.98.

Data Manipulation – HCPB/M was derived by dividing HCPB(State), Section 4.1.4.7, by State Population, Section 4.1.4.3.

4.2 Applying SEMDAM to USNHC Case Study

This section contains the step-by-step application of SEMDAM to USNHC. Since SEMDAM contains branching, looping, and jumping statements, each discrete case study task is named "Task X (Step Y)," where Step Y refers to the generic SEMDAM step from Section 3.2, SEMDAM Introduction & Systematic Description.

4.2.1 Task 1 (Step 1) – Gather Evidence of SE&M Activity

Overview – SEMDAM begins when requested by an authorized PM/SEM.

Input – See Section 4.1.1, Evidence of SE&M or other Assessments.

Process – Per Section 3.2.1, Step 1 – Gather Evidence of SE&M Activity.

Output – Evaluation of external SE&M assessments confirms that evidence of previous SE&M activity exists.

4.2.2 Task 2 (Step 2) – Evaluate Hypothesis ABD1 – COSP is Disorder

Overview – Evaluate evidence of consideration, use or attempted use of systems engineering and management (SE&M) methodologies for USNHC.


Input – Evidence of SE&M activity for USNHC from Section 4.1.1, Evidence of

SE&M or other Assessments. Candidate Expressions: Disorder, Known, Knowable,

Complex, Chaos.

Process – Evaluate H(ABD1) where:

H(ABD1): USNHC(COSP) is Disorder

H(ABD1)A: USNHC(COSP) is not Disorder

Output – Based on Evidence of SE&M activity from 4.2.1, reject H(ABD1) and infer that USNHC(COSP) is not Disorder.

4.2.3 Task 3 (Step 3) – IV&V Program & Systems Engineering Management

Overview – Perform IV&V of previous Program and Systems Engineering

Management activities. Obtain a validated set of stakeholders needs for the USNHC SOI.

Input – Reject H(ABD1). External information on USNHC Stakeholder Needs from

Section 4.1.2, Stakeholder Needs.

Process – Analyze and validate USNHC stakeholder needs per Section 3.2.3, Step 3 –

IV&V Program and SE Management. Set assumed COSP: Known.

Output – Validated Stakeholder Needs include: Improve health care and lower health care costs.

4.2.4 Task 4 (Step 4) – IV&V Business or Mission Analysis

Overview – Perform IV&V of previous Business or Mission Analysis (BMA) activities. Develop a validated statement of the problem or opportunity for USNHC SOI.

Input – Reject H(ABD1). Validated Stakeholder Needs. External information on

USNHC Problem or Opportunity from Section 4.1.3, Statement of the Problem or

Opportunity.


Process – Analyze and validate USNHC problem or opportunity description per

Section 3.2.4, Step 4 – IV&V Business or Mission Analysis.

Output – Validated Problem or Opportunity: Deploy Health Information Technology

(HIT) that protects privacy and enables data integration.

4.2.5 Task 5 (Step 5) – Obtain Evidence for Cause and Effect Analysis

Overview – Assuming COSP is Known implies that USNHC is operating in "the domain of established fact, we believe we fully comprehend the relationship between pieces of information in this space and are confident we can make predictions based on our understanding." (Gardner, 2013)

Input – Reject H(ABD1). Validated Stakeholder Needs. Validated Problem or

Opportunity.

Process – Identify potential attributes and associated statistical models per Section

3.2.5, Step 5 – Obtain Evidence for Cause and Effect Analysis. Subtasks include: make a valuable and nontrivial prediction related to validated stakeholder needs and validated problem or opportunity; identify measurable attributes for both predictor and response variables including a data set that contains the attributes; identify an appropriate statistical model based on selected attributes; qualify data to meet statistical model assumptions; and, evaluate results.

Basic cost/benefit analysis of automation for information technology suggests that one of the expected benefits of automating an SOI with a Known COSP is a reduction in negative outputs or outcomes. While every breach affecting more than 500 individuals is considered a negative outcome, larger reported PHI breaches may receive significant public attention, potentially affecting the HCP's business reputation. Since the COSP of USNHC is assumed Known, automation, in this case HCP implementation of EHRs, should reduce the number and size of HCP reported breaches.

Prediction – HCP automation of EHRs makes for a more secure HCP, resulting in fewer and smaller reported HCP breaches based on reported HCP breaches that involve electronic PHI data.

Source of Data – Section 4.1.4.4, All Reported Breaches of PHI Data, scoped to reported PHI breaches by HCPs.

Predictor Variable – Kind of Breach (Electronic or Physical) (categorical).

Response Variable – Individuals Affected (IA) per Breach by HCP (continuous).

Statistical Model Test – Two-sample t-test using the following null and alternative hypothesis:

H(IA): There is no difference in the size of HCP reported breaches based on the kind of media used to store the medical information; and,

H(IA)A: There is a statistically significant difference in the size of HCP reported breaches based on the kind of media used to store the medical information.

Performing qualification and significance testing of this hypothesis provided the following results for H(IA):

 Check IA distribution (Para 3.4.4.1); (α = 0.05); Reject normal distribution hypothesis (N = 733; AD = 242.472; p-value < 0.005);

 Perform Johnson Transformation of IA (Para 3.4.4.2) to obtain IAJT;

 Check IAJT distribution (Para 3.4.4.1); Fail to reject normal distribution hypothesis (N = 733; AD = 0.520; p-value = 0.186);

 Perform Grubbs' test for outliers for IAJT (Para 3.4.4.3); (α = 0.05; N = 733); Reject hypothesis of no outliers; 1 outlier removed;

 Perform Grubbs' test for outliers for IAJT (Para 3.4.4.3); (α = 0.05; N = 732); Fail to reject hypothesis of no outliers (G = 3.41; p-value = 0.447);

 Check IAJT distribution with 1 sample removed (Para 3.4.4.1); (α = 0.05); Fail to reject normal distribution hypothesis (N = 732; AD = 0.407; p-value = 0.349); and,

 Perform F-test for equal variances of IAJT (Para 3.4.4.4); (α = 0.05; N = 732); Fail to reject hypothesis that all variances are equal (F = 1.28; p-value = 0.064).

Test Results: After verifying that IAJT is normally distributed with no outliers and equal variances, a two-sample t-test for IAJT of PHI breaches by HCPs results in rejecting H(IA) (p-value of 0.042), thus concluding that there is a statistically significant difference between the mean IA for electronic breaches and the mean IA for physical breaches. Of the 732 breaches analyzed after outlier removal, 575 were electronic while 157 were physical.

Analysis of reported breaches by HCPs shows that electronic breaches are larger and more frequent than physical breaches. This counter-intuitive result suggests that HCP automation of EHRs did not reduce the number or severity of HCP reported PHI breaches.

Output – Evidence to Reject H(IA).
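The qualification sequence above can be sketched programmatically. The following illustrative Python fragment mirrors the order of checks (normality, transformation, outlier screening) on synthetic data; scipy's Yeo-Johnson transform is used as a stand-in for the Minitab Johnson Transformation of Para 3.4.4.2, and the Grubbs' test is implemented directly since it is not part of scipy.

```python
# A minimal sketch of the data-qualification sequence used in this task. The
# data are synthetic; only the order of checks mirrors the text above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ia = rng.lognormal(mean=7.0, sigma=1.5, size=733)  # synthetic Individuals Affected

def normal_at_5pct(x):
    """Anderson-Darling normality check at alpha = 0.05 (Para 3.4.4.1)."""
    res = stats.anderson(x, dist="norm")
    crit_5pct = res.critical_values[list(res.significance_level).index(5.0)]
    return res.statistic < crit_5pct, res.statistic

def grubbs_outlier(x, alpha=0.05):
    """Two-sided Grubbs' test (Para 3.4.4.3); returns index of an outlier or None."""
    n = len(x)
    g = np.max(np.abs(x - x.mean())) / x.std(ddof=1)
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return int(np.argmax(np.abs(x - x.mean()))) if g > g_crit else None

print("IA normal?", normal_at_5pct(ia))        # skewed raw data: expect rejection
ia_t, _ = stats.yeojohnson(ia)                 # stand-in for Johnson transformation
print("IA (transformed) normal?", normal_at_5pct(ia_t))
idx = grubbs_outlier(ia_t)
if idx is not None:                            # drop at most one outlier, then re-test
    ia_t = np.delete(ia_t, idx)
print("Outliers remaining?", grubbs_outlier(ia_t) is not None)
```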

4.2.6 Task 6 (Step 6) – Evaluate Hypothesis ABD2 – COSP is Known

Overview – Evaluate the hypothesis ABD2 using evidence obtained in Section 4.2.5,

Task 5 (Step 5) – Obtain Evidence for Cause and Effect Analysis.


Input – Reject H(ABD1). Evidence from 4.2.5, Task 5 (Step 5) – Obtain Evidence for

Cause and Effect Analysis. Candidate Expressions: Known, Knowable, Complex, Chaos.

Process – Evaluate H(ABD2) where:

H(ABD2): USNHC(COSP) is Known

H(ABD2)A: USNHC(COSP) is not Known

Output – Based on Evidence to Reject H(IA) from 4.2.5, reject H(ABD2) and infer that USNHC(COSP) is not Known. Assume USNHC(COSP) is Knowable.

4.2.7 Task 7 (Step 5) – Obtain Evidence for Cause and Effect Analysis

Overview – Assuming USNHC(COSP) is Knowable implies that USNHC is operating in the domain where a priori prediction of cause and effect requires special investigation or expert knowledge. IEEE described the need for external assistance or special investigation when they wrote "In an SoS, the owners of the constituent systems usually retain responsibility for engineering their systems" adding that the design process

“involves collaboration with the constituent systems who will conduct their own design trades to identify the approach to address SoS requirements as they apply to their system.” (ISO/IEC/IEEE 15288:2015(E), 2015, p. 104) INCOSE wrote “one of the objectives of the SE process is to minimize undesirable consequences” which “can be accomplished through the inclusion of and contributions from experts across relevant disciplines.” (INCOSE SEH, 2015, p. 12)

Input – Reject H(ABD1) & H(ABD2). USNHC(COSP) assumed to be Knowable.

Validated Stakeholder Needs. Validated Problem or Opportunity. Access to Expert

Knowledge and/or Results of Special Investigation.


Process – Identify potential attributes and associated statistical models per Section

3.2.5, Step 5 – Obtain Evidence for Cause and Effect Analysis that allows for a priori prediction requiring special investigation or expert knowledge. Subtasks include: make a valuable and nontrivial prediction, aided by special study or expert assistance, related to validated stakeholder needs and validated problem or opportunity; identify measurable attributes for both predictor and response variables including a data set that contains the attributes; identify an appropriate statistical model based on selected attributes; qualify data to meet statistical model assumptions; and, evaluate results.

The observation that "a rising tide lifts all boats" captures the idea that a changing environment affects objects in that environment regardless of each object's individual intent. Section 4.1.4.1, Federal Incentive Payments to Unique Providers, contains evidence of federal incentives for medical professionals, where each individual medical professional, medical practice, or hospital independently accepted or did not accept federal EHR incentives. Section 4.1.4.2, Federal Grants to States, contains evidence of federal grants for states to establish an SBE. States have a more complicated governance structure than unique providers: led by a Governor, each state is guided by a state legislature and regulated by state courts. State acceptance of federal funds initiates state-based legislative and legal debate requiring special investigation and/or expert knowledge based on the legal principle of subsidiarity.

Bellia wrote to “assess the role of states as privacy law innovators, we need to consider the interplay of state and federal developments” adding “scholars defend the presumption of decentralization … for discussions of federalism …based on the principle of ‘subsidiarity’” which is the principle that “regulation should occur at the lowest level


of capable government." (Bellia, 2009) HIPAA's preemption provisions allow states to create laws that are more exacting than HIPAA; however, states may not relax any of the HIPAA requirements. (McGraw, Leiter, & Rasmussen, 2013) Bellia wrote "state-level privacy initiatives are likely to have the most influence on the shape of federal law when they fill an information privacy gap rather than following federal action" adding "state laws can provide possible models for federal regulation." (Bellia, 2009) Bellia provides the following example:

Consider the status of California’s motor vehicle emissions standards before Congress gave the federal government regulatory authority over air pollution and preempted all state standards but California’s. {See Air Quality Act of 1967, Public Law 90-148} If California adopts the nation’s highest emissions standards, automobile manufacturers that wish to serve the California market are forced to produce a car that is compliant with California’s standards. (Bellia, 2009)

USNHC contains PHI protected by a patchwork of federal statutes, regulations, and state laws that sometimes conflict with one another. (Dimitropoulos & Rizk, 2009)

Considering that individual state laws impact USNHC behavior suggests that expert knowledge in legal and legislative areas may be beneficial to reducing breaches per capita. Sixteen (16) states and the District of Columbia accepted substantial federal grants to create an SBE. (Mach & Redhead, 2014)

Potential benefits of significant financial investment for automation of a large scale

SOI include increased awareness of the problem – due in part to the knowledge obtained via special investigation or interacting with experts – and improved results via the

Hawthorne effect. Accepting a federal grant to establish a SBE should increase awareness of HIPAA requirements for state legislatures, state courts, and medical entities operating in that state resulting in a more secure state operating environment as measured by average per-capita reported PHI breaches.


Prediction – States that accepted federal grants to establish a state exchange will have a lower average number of all reported PHI breaches per capita.

Sources of Data – Section 4.1.4.2, Federal Grants to States. Section 4.1.4.3, State

Populations. Section 4.1.4.4, All Reported Breaches of PHI Data.

Predictor Variable – State/District acceptance of substantial federal funding to establish a SBE (categorical).

Response Variable – AB/M (continuous).

Statistical Model Test – Acceptance/non-acceptance of federal grant money to states allows use of a two-sample t-test to compare AB/M between states that accepted substantial federal investment to establish an SBE and states that did not, as follows:

H(AB/M): There is no difference in the means of AB/M between states based

on acceptance of substantial federal grants to establish an SBE;

and,

H(AB/M)A: There is a statistically significant difference in the means of AB/M

by state based on acceptance of substantial federal grants to

establish an SBE.

Performing qualification and significance testing of this hypothesis provided the following results for H(AB/M):

 Check AB/M distribution (Para 3.4.4.1); (α = 0.05); Reject normal distribution hypothesis (N = 51; AD = 2.172; p-value < 0.005);

 Perform Johnson Transformation of AB/M to obtain AB/MJT (Para 3.4.4.2);

 Check AB/MJT distribution (Para 3.4.4.1); (α = 0.05); Fail to reject normal distribution hypothesis (N = 51; AD = 0.147; p-value = 0.964);

 Perform Grubbs' test for outliers for AB/MJT (Para 3.4.4.3); (α = 0.05); Fail to reject hypothesis of no outliers (N = 51; G = 2.91; p-value = 9.125); and,

 Perform F-test for equal variances of AB/MJT (Para 3.4.4.4); (α = 0.05; N = 50); Fail to reject hypothesis that variances are equal (F = 0.57; p-value = 0.165).

Test Results: After verifying that AB/MJT is normally distributed with no outliers and equal variances, a two-sample t-test for AB/MJT results in rejecting H(AB/M) (T-value of -1.93 and p-value of 0.059), thus concluding that there is a statistically significant difference in the means of AB/M by state/district based on acceptance of substantial federal grants to establish an SBE.

Output – Evidence to Reject H(AB/M).
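The comparison in this task reduces to splitting a per-state measure by a categorical predictor and testing the difference in means. The sketch below illustrates that pattern on synthetic AB/M values; the values, the placeholder set of SBE states, and the use of a simple two-sided F-test before the pooled t-test are assumptions for illustration only.

```python
# Minimal sketch: split a per-state measure (here AB/M) by a categorical
# predictor (SBE acceptance), check equal variances with an F-test
# (Para 3.4.4.4), then run a two-sample t-test. Values are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ab_per_million = dict(zip(range(51), rng.normal(loc=4.0, scale=2.0, size=51)))
sbe_states = set(range(17))          # placeholder for the 16 states + DC with an SBE

sbe = np.array([v for k, v in ab_per_million.items() if k in sbe_states])
no_sbe = np.array([v for k, v in ab_per_million.items() if k not in sbe_states])

# Two-sided F-test for equality of variances.
f = sbe.var(ddof=1) / no_sbe.var(ddof=1)
dfn, dfd = len(sbe) - 1, len(no_sbe) - 1
p_f = 2 * min(stats.f.cdf(f, dfn, dfd), stats.f.sf(f, dfn, dfd))

# Pooled t-test if variances look equal at alpha = 0.05, else Welch's t-test.
t_stat, p_t = stats.ttest_ind(sbe, no_sbe, equal_var=(p_f > 0.05))
print(f"F-test p = {p_f:.3f}; t = {t_stat:.2f}, p = {p_t:.3f}")
```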

4.2.8 Task 8 (Step 7) – Evaluate Hypothesis ABD3 – COSP is Knowable

Overview – SEMDAM looks for statistically significant evidence of a priori prediction of cause and effect requiring special investigation or expert knowledge.

Input – Reject H(ABD1) & H(ABD2). Evidence to Reject H(AB/M). Candidate

Expressions: Knowable, Complex, and Chaos.

Process – Evaluate H(ABD3) where:

H(ABD3): USNHC(COSP) is Knowable

H(ABD3)A: USNHC(COSP) is not Knowable

Output – Based on Evidence to Reject H(AB/M) from 4.2.7, reject H(ABD3) and infer that USNHC(COSP) is not Knowable. Assume USNHC(COSP) is Complex.


4.2.9 Task 9 (Step 5) – Obtain Evidence for Cause and Effect Analysis

Overview – There is a great deal of systems engineering literature reflecting a growing understanding that engineering complex systems requires identifying and adapting to emergent behaviors. Much of the existing systems engineering literature uses the concept of emergent behaviors as a potential indicator of system complexity. Rather than use the presence of emergent behaviors to validate system complexity, after rejecting COSPs of Disorder, Known, and Knowable, this research assumes that USNHC(COSP) is Complex and focuses on statistically significant a posteriori perception of trends or relationships between attributes of interest.

Assuming USNHC(COSP) is Complex implies that USNHC is operating in the domain where a priori prediction of cause and effect is not possible. DeRosa wrote “in development of complex systems, we encounter… outcomes that simply cannot be envisioned a priori” adding “dealing with these unknown unknowns becomes a planning process that is executed not before the fact, but rather, as the opportunities and risks emerge.” (DeRosa, Introduction, 2011, p. 25)

An individual’s perception of and trust in the USNHC will depend on their interactions and resultant USNHC reaction(s). This research focused on the two primary interactions between an individual (either as a consumer or a patient) and USNHC shown in Figure 4-3: Two Primary Individual Interactions with USNHC: Acquire Medical

Coverage & Obtain Medical Assistance. The two primary interactions are:

 As a consumer, an individual provides Personally Identifiable Information

(PII) to acquire healthcare coverage from either a SBE or the federal

marketplace (FM); and,


 As a patient, an individual provides PHI to obtain medical assistance from

their HCP.

As shown in Figure 4-3, USNHC components within a state include:

 access to the federal marketplace (FM);

 covered entities (healthcare providers, health plans, and healthcare

clearinghouses);

 business associates; and,

 optionally, a SBE.

Figure 4-3: Two Primary Individual Interactions with USNHC: Acquire Medical Coverage & Obtain Medical Assistance

Review and analysis of Figure 4-3 raises the following question: if there are only two primary interactions between an individual and USNHC, why are there two distinct data protection standards (i.e., PII vs. PHI)? Answering that question provides


support to the assumption that COSP of USNHC is complex and provides insight into the business and mission changes within USNHC.

Commercial covered entities (Healthcare Providers (HCP), Health Plans, and Healthcare Clearinghouses) and their Business Associates are subject to HIPAA, managed by HHS/OCR, which is the primary source of requirements for privacy, security, and breach notification for PHI. While HIPAA applies to covered entities and their business associates, it does not apply to state-based exchanges or the federal marketplace. In 2013, HHS/OCR verified that the HIPAA data protection standard was PHI and not PII when they wrote:

For clarity and consistency, we also proposed to change the term ‘‘individually identifiable health information’’ in the current definition of ‘‘business associate’’ to ‘‘protected health information,’’ since a business associate has no obligation under the HIPAA Rules with respect to individually identifiable health information that is not protected health information. (HHS/OCR, 2013, pp. 5,574)

The Federal Marketplace, managed by HHS/CMS, contains Personally Identifiable Information (PII), which includes patient names, social security numbers, physical addresses, financial information, and health plan information, and which is not subject to HIPAA's security, privacy, or breach notification requirements. In the same year, 2013, HHS/CMS verified that PII, not PHI, was their data protection standard when they wrote:

We considered but declined to use the definitions for these terms provided under the HIPAA regulations because the protected health information (PHI) that triggers the HIPAA requirements is considered a subset of PII, and we believe that the HIPAA definitions would not provide broad enough protections to satisfy the requirements under the Privacy Act of 1974 (5 U.S.C. 552a), the e-Government Act of 2002 (Pub. L. 107–347), other laws to which HHS is subject, or the expectations of the other Federal agencies that will be providing PII to facilitate Exchange eligibility determinations. (HHS/CMS, 2013, pp. 37,049)

It is difficult to imagine a situation where any requirement from the Privacy Act, the e-Government Act, or other law would apply to HHS in such a way as to impact only

HHS/CMS and not HHS/OCR.


There are potentially 19 unique data security standards in place – one for each state or DC that has an SBE, plus PHI and PII. GAO wrote, "States establishing their own marketplaces are responsible for securing the supporting information systems to protect sensitive personal information they contain." (GAO, 2016) Russom, Sloan, and Warner wrote "variation in state law definitions shows that there is little or no general agreement on what information should be legally protected." (Russom, Sloan, & Warner, 2011)

Breach or Incident notification requirements vary substantially as well. HHS/OCR wrote “Section 13402 of the HITECH Act requires HIPAA covered entities to provide notification to affected individuals and to the Secretary of HHS following the discovery of a breach of unsecured protected health information” because “the Act requires the

Secretary to post on an HHS Web site a list of covered entities that experience breaches of unsecured protected health information involving more than 500 individuals” where the notification is expected “without unreasonable delay but in no case later than 60 calendar days from the discovery of the breach.” In contrast, HHS/CMS wrote “in the event of an incident or breach, the entity where the incident or breach occurs would be responsible for reporting and managing it according to the entity’s documented incident handling or breach notification procedures” adding “FFEs, non-Exchange entities associated with FFEs, and State Exchanges must report all privacy and security incidents and breaches to HHS within one hour of discovering the incident or breach.” (HHS/CMS,

2013, pp. 37,049) Breaches of SBE and PII, regardless of number of individuals affected, are not included in the HHS/OCR Breach Portal nor are they publicly available.

One of the driving requirements for establishing an SBE was access to federal tax credits for individuals who purchase health care within a state. With the 2015 U.S. Supreme Court ruling in King v. Burwell (KING ET AL. v. BURWELL, SECRETARY OF HEALTH AND HUMAN SERVICES, ET AL., 2014), residents of states that did not create their own exchange remained eligible for those federal tax credits. With that substantial change to the mission requirements, SBEs became optional. Evidence that the state retains the authority to decide on use of an SBE or the FFE was provided by Kentucky Governor Matt Bevin when he announced plans to "wind down kynect and transition Kentucky to the federal site," adding that his goal is to "eliminate the redundancy of Kentucky's online health exchange." (Yetter, 2016)

Rebovich wrote “when networked systems are individually adapting to both technology and mission changes, then the environment for any given system or individual becomes essentially unpredictable” adding “this new complexity is not a consequence of the interdependencies that arise when large numbers of systems are networked together to achieve some collaborative advantage.” (Rebovich Jr., Systems Thinking for the

Enterprise, 2011, p. 33) Rebovich continued “the combination of large-scale interdependencies and unpredictability creates an environment that is fundamentally different from that of the system or system of systems (SoS) level” adding, “systems engineering success expands to encompass not only success of an individual system or

SoS, but also the network of constantly changing systems.” (Rebovich Jr., Systems

Thinking for the Enterprise, 2011, p. 33)

Input – Historical information on Reported PHI Breaches by HCPs by state.

Process – We look at past HCP reported PHI breaches by state to determine if it is possible to identify or perceive a statistically significant a posteriori trend or finding. The analysis above suggests that the largest volume of technology and mission changes occurs at an HCP operating in a state that accepted substantial federal grants to establish an SBE. That HCP would be required to meet all federal PHI and PII requirements and potentially additional unique state-based data security standards, while adhering to unique additional state requirements for individual access to EHR PHI data.

Source of Data – Section 4.1.4.7, HCPB(State) – HCP Reported Breaches by State.

Input Variable – State/District acceptance of substantial federal funding to establish a SBE (categorical).

Response Variable – HCPB/M (continuous).

Statistical Method – Two-sample t-test with the following hypothesis:

H(HCPB/M): There is no difference in the means of HCPB/M between states

based on acceptance of substantial federal grants to establish an

SBE in that state; and,

H(HCPB/M)A: There is a statistically significant difference in the means of

HCPB/M by state based on acceptance of substantial federal grants

to establish a SBE.

We performed the following data conditioning steps:

 Check HCPB/M distribution (Para 3.4.4.1); (α = 0.05); Fail to reject normal distribution hypothesis (N = 51; AD = 0.481; p-value = 0.223);

 Perform Grubbs' test for outliers for HCPB/M (Para 3.4.4.3); (α = 0.05); Fail to reject hypothesis of no outliers (N = 51; G = 2.33; p-value = 0.867); and,

 Perform F-test for equal variances for HCPB/M (Para 3.4.4.4); (α = 0.05); Fail to reject hypothesis that all variances are equal (N = 51; F = 0.55; p-value = 0.139).


Test Results: Having verified that HCPB/M is normally distributed with no outliers and equal variances, performing a two-sample t-test for HCPB/M results in a T-value of -2.07 and a p-value of 0.044. Therefore, reject H(HCPB/M) and conclude that there is a statistically significant difference in the population means of HCPB/M, indicating there is sufficient evidence to suggest that a state's decision to accept significant federal grants to establish an SBE may have an effect on PHI breaches by Healthcare Providers (HCPs) in that state.

Output – Evidence to reject H(HCPB/M).
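For completeness, the decision step of this task can be illustrated as follows; the HCPB/M values are synthetic, and the group sizes (17 SBE states including DC versus 34 others) are the only figures carried over from the text.

```python
# A small sketch of the decision step: given HCPB/M values split by SBE
# acceptance (synthetic here), report the t-test decision at alpha = 0.05
# together with the group means, so the direction of any difference is
# visible alongside its significance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
hcpbm_sbe = rng.normal(loc=2.4, scale=0.8, size=17)     # states + DC with an SBE
hcpbm_no_sbe = rng.normal(loc=1.9, scale=0.8, size=34)  # remaining states

t_stat, p_val = stats.ttest_ind(hcpbm_sbe, hcpbm_no_sbe, equal_var=True)
decision = "reject H(HCPB/M)" if p_val < 0.05 else "fail to reject H(HCPB/M)"
print(f"mean(SBE) = {hcpbm_sbe.mean():.2f}, mean(no SBE) = {hcpbm_no_sbe.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_val:.3f} -> {decision}")
```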

4.2.10 Task 10 (Step 8) – Evaluate Hypothesis ABD4 – COSP is Complex

Overview – SEMDAM looks for statistically significant evidence of a posteriori perception of cause and effect.

Input – Reject H(ABD1), Reject H(ABD2) & Reject H(ABD3). Evidence to reject

H(HCPB/M). Candidate Expressions: Complex and Chaos.

Process – Evaluate H(ABD4) where:

H(ABD4): USNHC(COSP) is Complex

H(ABD4)A: USNHC(COSP) is not Complex

Output – Based on evidence of a posteriori perception {e.g., Reject H(HCPB/M)} from Section 4.2.9, fail to reject H(ABD4) and infer that USNHC(COSP) is Complex.

4.2.11 Task 11 (Step 10) – Recommend SEM based on Inferred COSP

Overview – SEMDAM provides a recommendation for a COSP appropriate SEM based on measuring statistically significant association for a priori prediction of system outcomes or a posteriori perception of system response.

Input – Inferred USNHC(COSP) is Complex.


Process – Using the inferred COSP of Complex as input, lookup of the associated complexity appropriate SEM from Table 2-11: Proposed Alignment Between Inferred COSP and Complexity Appropriate SEM results in the recommendation of ESM as the complexity appropriate SEM for USNHC.

Output – Recommend ESM as SEM for USNHC.
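The abductive flow of Tasks 2 through 11 can be summarized in a short illustrative sketch. This is not part of SEMDAM itself; the evidence checks are stand-in callables, and the mapping of the non-Complex COSP expressions to TSM, SoSM, and CSM is an assumed reading of Table 2-11 (only the Complex-to-ESM pairing is stated explicitly in this case study).

```python
# An illustrative sketch (not SEMDAM itself) of the abductive ladder followed
# in Tasks 2-11: each hypothesis about the COSP is kept only if the evidence
# fails to reject it; otherwise the next, more complex expression is assumed.
from typing import Callable, List, Tuple

# Candidate COSP expressions in evaluation order, with the kind of evidence
# SEMDAM looks for at each rung.
LADDER: List[Tuple[str, str]] = [
    ("Disorder", "no evidence of SE&M activity"),
    ("Known", "a priori prediction without special investigation"),
    ("Knowable", "a priori prediction aided by special investigation/experts"),
    ("Complex", "a posteriori perception of trends"),
    ("Chaos", "no statistically significant prediction or perception"),
]

# Assumed mapping of inferred COSP to complexity appropriate SEM (Table 2-11).
SEM_FOR_COSP = {"Known": "TSM", "Knowable": "SoSM", "Complex": "ESM", "Chaos": "CSM"}

def infer_cosp(evidence_supports: Callable[[str], bool]) -> str:
    """Walk the ladder; return the first COSP whose evidence is not rejected."""
    for cosp, _description in LADDER:
        if evidence_supports(cosp):
            return cosp
    return "Chaos"

def usnhc_evidence(cosp: str) -> bool:
    # In the case study, Disorder, Known, and Knowable were rejected and
    # Complex was not, so only Complex is supported here.
    return cosp == "Complex"

cosp = infer_cosp(usnhc_evidence)
print(cosp, "->", SEM_FOR_COSP.get(cosp, "n/a"))
```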

4.3 Summation of USNHC Case Study

This empirical case study of SEMDAM applied to the USNHC demonstrated the methodology presented in Chapter 3, SEMDAM Methodology. Observations and considerations of the system engineering aspects of the methodology are described in

Chapter 5, Synthesis & Discussion. Case study centric findings and observations of

USNHC gained during execution of this empirical case study are included below.

4.3.1 ‘Striking a Balance’ between ‘Uses and Disclosures’

HIPAA was enacted in 1996 to create "standards for transactions, and data elements for such transactions to enable health information to be exchanged electronically," listing the original goals of "improving the operations of the health care system and reducing administrative costs." (104th U.S. Congress, 1996, p. 91) HHS/OCR wrote:

A major goal of [HIPAA] is to assure that individuals’ health information is properly protected while allowing the flow of health information needed to provide and promote high quality health care and to protect the public’s health and well-being. The Rule strikes a balance that permits important uses of information and disclosures that need to be addressed while protecting the privacy of people who seek care and healing. (HHS/OCR, 2003)

The original HIPAA statute dealt only tangentially with health information privacy through the adoption and standardization of electronic health records. (Wilkes, 2014)

Requirements were codified by Congress in Subtitle F, Administrative Simplification, Section 1177, Wrongful Disclosure of Individually Identifiable Health Information, as:


‘SEC. 1177. (a) OFFENSE.—A person who knowingly and in violation of this part— ‘‘(1) uses or causes to be used a unique health identifier; ‘‘(2) obtains individually identifiable health information relating to an individual; or ‘‘(3) discloses individually identifiable health information to another person, shall be punished as provided in subsection (b). (93rd U.S. Congress, 1974)

“Wrongful Disclosure” requirements from Subtitle F grew into HIPAA’s privacy, security, and transaction and code sets rules. Moore, et al., wrote

The statute assigned enforcement of Subtitle F to the Department of Health and Human Services (HHS), which proceeded to establish standards for the storage and transmission of electronic health data. In carrying out this task, however, HHS officials realized that not only did health care, as an industry, lack standards for adequate storage and transmission of electronic health data, but there were very few prevailing laws or rules that could adequately protect such data. Recognizing the importance of securing the public's confidence in both the privacy and security of health data, HHS promulgated a set of uniform Privacy and Security Rules to establish minimum requirements for appropriate use and protection of health information. Thus, the HIPAA regulations are not the result of a direct Congressional statutory command but arise from a fairly broad interpretation of the statute by the implementing agency. (Moore, et al., 2007)

The following sections provide an assessment of USNHC measured against the two original goals of improving the operations of the health care system and reducing administrative costs, followed by the USNHC Empirical Case Study Summary and Conclusions.

4.3.2 Improving the Operations of the Health Care System

The following potential benefits of USNHC were presented to Congress in 2014:

There is a growing consensus in the biomedical community, especially at the administrative level, that the appropriate use of EHRs and {Health Information Exchange} HIEs could lead to improved health outcomes overall, and help to lower health care costs in the long term. The movement towards EHRs (and their exchange via HIEs and other mechanisms) enjoys support from many funding and regulatory agencies, health care providers, health entrepreneurs, and other stakeholders within the biomedical community. (MITRE Corporation, 2014, p. 14)

Appari and Johnson wrote, “{information technology} IT advances and their adoption in healthcare are more likely to improve care provision quality, reduce costs, and advance


medical science” warning that “this evolution has increased the potential for information security risks and privacy violations.” (Appari & Johnson, 2010)

Squires and Anderson wrote "despite its heavy investment in health care, the U.S. sees poorer results on several key health outcome measures such as life expectancy and the prevalence of chronic conditions." (Squires & Anderson, 2015) Stein said "One of the fundamental ways scientists measure the well-being of a nation is tracking the rate at which its citizens die and how long they can be expected to live" adding "The overall U.S. death rate has increased for the first time in a decade." Stein noted that the increased U.S. death rate "led to a drop in overall life expectancy for the first time since 1993, particularly among people younger than 65" adding "Whatever the cause, the trend is concerning, especially when the death rate is continuing to drop and life expectancy is still on the rise in most other industrialized countries." (NPR Morning Edition, 2016)

4.3.3 Reducing Administrative Costs

The United States spends more on healthcare services than any other country in the world. In 2014, the United States spent 17.14 percent of its Gross Domestic Product

(GDP) or $2.973 trillion (US) on healthcare. This was both the highest percentage of

GDP and largest amount of money spent. (World Bank, 2016)

Squires and Anderson wrote “Health care spending in the U.S. far exceeds that of other high-income countries” adding that in 2013 “the U.S. spent 17.1 percent of its gross domestic product (GDP) on health care. This was almost 50 percent more than the next- highest spender (France, 11.6%) and almost double what was spent in the U.K. (8.8%).”

Healthcare spending in the U.S. averaged $9,086 per person in 2013. (Squires &

Anderson, 2015)


4.3.4 USNHC Empirical Case Study Summary & Conclusions

There is no evidence to suggest that additional privacy, security, or notification requirements resulted in fewer breaches being reported, nor is there evidence of the effectiveness of breach reporting – this research assumed that all PHI breaches that occurred were reported. There is evidence, however, that the nature of breach (e.g., malicious or negligent) impacts the time it takes a covered entity to report a PHI breach. Using data from Section 4.1.4.4, All Reported Breaches of PHI Data, a two-sample t-test was run on 733 breach notifications to determine if there were differences in time to report (in days) between malicious breaches and negligent breaches. Mean time to report for malicious breaches (95 days) and negligent breaches (169 days) was statistically significantly different (p = 0.005).
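A minimal sketch of that time-to-report comparison follows, assuming the COBPS data set records (or can be joined with) both a breach date and a report submission date; the column names and the four example rows are hypothetical.

```python
# Minimal sketch of the time-to-report comparison; column names and values
# are hypothetical stand-ins for the COBPS data set.
import pandas as pd
from scipy import stats

breaches = pd.DataFrame({
    "nature": ["Malicious", "Negligent", "Malicious", "Negligent"],
    "breach_date": pd.to_datetime(["2013-01-05", "2013-02-01",
                                   "2013-03-10", "2013-04-02"]),
    "report_date": pd.to_datetime(["2013-04-01", "2013-08-15",
                                   "2013-06-20", "2013-09-30"]),
})

# Derive the response variable: days elapsed between breach and report.
breaches["days_to_report"] = (breaches["report_date"] - breaches["breach_date"]).dt.days

malicious = breaches.loc[breaches["nature"] == "Malicious", "days_to_report"]
negligent = breaches.loc[breaches["nature"] == "Negligent", "days_to_report"]

# Two-sample t-test on mean time to report by nature of breach.
t_stat, p_val = stats.ttest_ind(malicious, negligent, equal_var=True)
print(malicious.mean(), negligent.mean(), t_stat, p_val)
```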

Reluctance to report a breach caused by negligence may be related to potential penalties that may be levied on the covered entity that self-reports the breach. HIPAA specifies penalties that range from $50,000 to $250,000 including the potential of imprisonment from 1 year up to 10 years depending on the severity of the violation.

HITECH imposes penalties ranging from $100 to $1.5 million for uncorrected or repeated violations. (111th U.S. Congress, 2009, p. 272) Wilkes described HIPAA as “an expressive law with behavior-altering sanctions” that has shaped privacy culture and

PHI-sharing behavior within the medical community and wrote:

This disjointed HIPAA experience—arising from highly publicized, progressively larger fines and increased auditing, but not matched by an understanding of how to adequately prevent the same—has led to heavy overcompensation on preventative measures, at the expense of best patient care as well as privacy obsession among providers and, consequently, their patients.

The statute’s murky standards and tremendous potential for monetary and reputational penalties has taught the medical community at large to resist and even


fear sharing PHI. Providers are unwilling to share PHI with one another, and patients have learned to guard their medical records with similar obstinacy.

HIPAA statute, with its blurred standards and draconian penalties, has created a privacy paranoia in patients through their providers that has obstructed health- enhancing and cost-saving PHI-sharing between consenting providers. HIPAA, instead of enhancing physician collaboration, has actually inhibited patient care and cost health care systems hundreds of millions of dollars. (Wilkes, 2014, p. 1216)

Appari and Johnson wrote “An adverse view of HIPAA is also reflected in lower adoption rates of health information systems such as EMR bolstering the perception that privacy laws may actually have a negative effect on the ulterior goals of providing quality care at low cost” adding “hospitals in states with privacy laws were 24% less likely to adopt an EMR system. However, in states with no privacy laws, they found that a hospital’s adoption of EMR increases the likelihood of a neighboring hospital adopting

EMR by about 7%.” (Appari & Johnson, 2010)

The Ponemon Institute summarized the impacts of PHI breaches when they wrote that

65% of victims of medical identity theft had to pay an average of $13,500 and spend more than 200 hours to resolve a breach. In that same survey, 89% of victims reported embarrassment due to inadvertent disclosure of sensitive personal health conditions; 19% reported loss of career opportunities while 3% reported termination of employment.

(Ponemon Institute, 2015) Thus, while there is evidence that breaches of PHI data may be harmful, there is no evidence to suggest that PHI breaches are lethal.

The same may not be said for medical errors that may be caused by the sheer volume of behavioral changes expected of medical professionals, who face constantly changing HIPAA procedures while being forced to use EHRs – with substantial penalties for misuse – to make a diagnosis based on inaccessible or inaccurate medical information. Johnston and Warkentin wrote that changes to HIPAA "result in managerial and


behavioral modifications that transcend mere technical controls” adding “all healthcare employees must alter the processes they have employed to create, handle, store, manipulate, and convey all data about patients.” (Johnston & Warkentin, 2008) Reducing errors by medical professionals that cause serious injury or death to patients would improve quality of medical care. James wrote:

Medical care in the United States is technically complex at the individual provider level, at the system level, and at the national level. The amount of new knowledge generated each year by clinical research that applies directly to patient care can easily overwhelm the individual physician trying to optimize the care of his patients. The United States trails behind other developed nations in implementing electronic medical records for its citizens. Hence, the information a physician needs to optimize care of a patient is often unavailable.

The true number of premature deaths associated with preventable harm to patients was estimated at more than 400,000 per year. Serious harm seems to be 10- to 20-fold more common than lethal harm. (James, 2013)

If James is correct and errors by medical professionals result in more than 400,000 deaths per year, how does the number of deaths caused by medical error relate to other causes of death? Makary and Daniel wrote:

Medical error is not included on death certificates or in rankings of cause of death.

The annual list of the most common causes of death in the United States, compiled by the Centers for Disease Control and Prevention (CDC), informs public awareness and national research priorities each year. The list is created using death certificates filled out by physicians, funeral directors, medical examiners, and coroners. However, a major limitation of the death certificate is that it relies on assigning an International Classification of Disease (ICD) code to the cause of death. As a result, causes of death not associated with an ICD code, such as human and system factors, are not captured. Comparing our estimate to CDC rankings suggests that medical error is the third most common cause of death in the US. (Makary & Daniel, 2016)

Review of CDC’s mortality ranking reveals that only heart disease and cancer cause more deaths annually than medical error. (Xu, Murphy, Kochanek, & Arias, 2016)

The American Public Health Association (APHA) wrote “despite spending more than twice what most other industrialized nations spend on health care, the U.S. ranks 24th out


of 30 such nations in terms of life expectancy." (APHA, 2012) The intended outcome of federal investments and legislation in USNHC (e.g., HIPAA, HITECH, PPACA) was increased availability of medical information at the doctor/patient interaction and at the national level for research. Since there is no readily available measurement for a positive outcome (i.e., there are no reports of medical information that was used but not breached, for example), USNHC has adopted the negative measurements that do exist, such as cost of care and breaches of PHI. MITRE wrote:

Health care is one of the largest segments of the US economy, approaching 20% of GDP. Despite the obvious technological aspects of modern medicine, it is one of the last major segments of the economy to become widely accepting of digital information technology, for a variety of practical and cultural reasons. That said, the adoption of electronic records in medicine has been embraced, particularly by health care administrators in the private sector and by the leaders of agencies of the federal and state governments with responsibility for health care. Although the transition to electronic records now seems a foregone conclusion, it is beset by many challenges, and the form and speed of that transition is uncertain. (MITRE Corporation, 2014, p. 9)

Analysis of Figure 4-4 suggests that the U.S. federal government may be investing more money in protecting electronic patient information than in protecting actual patients.


Figure 4-4: USNHC Congressional and HHS Initiatives Overlaying Health Care Spending for 13 High-Income Countries (Squires & Anderson, 2015) who cited Organization for Economic Cooperation and Development (OECD) Health Data 2015

5 Synthesis & Discussion

You can observe a lot just by watching. – Yogi Berra

This chapter summarizes the findings from application of SEMDAM via empirical case study of USNHC to validate the SEMDAM methodology.

5.1 Chapter Introduction

Section 1.2, Major Research Questions, identified the following research objectives:

 Develop a generalizable typology of existing and emerging set of SEMs based

on SE theory verified by descriptive method in Sections 2.3, Engineering

Ordered Systems (EOS), Section 2.4, Engineering Un-Ordered Systems

(EUOS), and Section 3.1.2.2, Defining candidate explanations H1, …, Hn and

qualitative method in Chapter 5;

 Develop a COSP sensing framework verified by descriptive method in Section

2.5, Cynefin sense-making Framework and qualitative method in Chapter 5;

 Develop a generalizable diagnostic assessment model (DAM) to assess COSP

and recommend SEM verified by descriptive and correlational methods in

Chapters 2 and 3 and qualitative method in Chapter 4; and,

 Demonstrate System Engineering Method Diagnostic Assessment Model

(SEMDAM) verified by qualitative method in Chapter 4.

5.2 Opportunities and Challenges of SEMDAM

The experience gained while developing and documenting SEMDAM supports Checkland's observation: "although we were armed with the methodology of systems engineering and were eager to use its techniques to help engineer real-world systems to achieve their objectives, the management situations we worked in were always too


complex for straightforward application of the systems engineering approach.”

(Checkland, 2000)

5.2.1 Generalized Typology of SEMs

Organization and presentation of TSM, SoSM, ESM, and CSM are based on typological analysis. Typological analysis is a strategy for descriptive qualitative (or quantitative) data analysis whose goal is the development of a set of related but distinct categories within a phenomenon, in this case the engineering of systems, that discriminate across the phenomenon. Typologies are characterized by categorization but not by hierarchical arrangement; the categories in a typology are related to one another, not subsidiary to one another.

This research considered using system attributes and/or characteristics (e.g., managerial independence, emergence, etc.); however, the results were inconclusive. Many of the proposed system attributes and/or characteristics are not well defined. The SEM typology was specifically structured to provide a complexity appropriate SEM based on demonstrated domain knowledge. None of the SEMs is better than the others. They each come with benefits and limitations that need to be understood before application. Assuming that TSM will eventually solve every problem is high risk, while assuming that every program needs to address the underpinning assumptions of classical science is a high-risk approach as well. Understanding (and agreeing) that there are four tools in the toolbox and that each of the tools works for classes of programs with varying degrees of complexity would be a benefit. Having a generally available, industry-accepted ability to reliably and regularly measure COSP, thus ensuring that the right tool is in use, would be a significant benefit.


Identifying the attributes and statistical methods required to verify the abductive logic of SEMDAM validated that the SEM typology was appropriate and sufficient. Gorod, Gandhi, White, Ireland, and Sauser provided external validation of the SEMs and the proposed typology via descriptive method review of Modern History of Systems of Systems, Enterprises, and Complex Systems. (Gorod, Gandhi, White, Ireland, & Sauser, 2015)

Grouping SEMs into Engineering for Ordered Systems (EOS) and Engineering for Un-Ordered Systems (EUOS) was based on recognizing the similarities between TSM and SoSM (e.g., the assumptions of Newtonian Mechanics, Decomposition, Hierarchical Management, and Stable Environment apply) and between ESM and CSM (e.g., those assumptions may not apply). Application of SEMDAM to USNHC via empirical case study validated the observed need for a typology of SEMs arranged by increasing capability to support increased complexity.

5.2.2 Development of a COSP Sensing Framework

Review of Section 2, Literature Review, demonstrates that this research analyzed each of the potential sources of complexity (e.g., people, process, technology, stability, environment), thus ensuring that SEMDAM is indeed sensing the COSP. The case study validated that the number of SEMs and their proposed alignment with the Cynefin framework, as demonstrated by paired attribute/method verification of cause and effect, was appropriate and sufficient. While other researchers have identified 'white box' objective measurements of system complexity (McCabe, Bar-Yam, IEEE, INCOSE, Sheard, McEver, etc.), previous attempts to identify objective 'black box' measurements for assessing system complexity in situ were inconclusive. (Sheard, MITRE)


SEMDAM is unique in that it does not assume that any single attribute or combination of attributes can measure COSP in all situations. Rather, it measures the PM/SEM's ability to sense COSP within the context of the SOI, as demonstrated by the statistical significance of paired attribute/method prediction or perception for any single attribute or combination of attributes identified by the PM/SEM.

5.2.3 Development of SEMDAM

Shenhar and Sauser wrote "One of the fundamental elements in understanding a system's nature is the need to distinguish between the system type and its strategic problems, as well as its systems engineering and managerial problems" adding "Therefore, no single approach can solve these emerging problems and thus no one strategy is best for all projects." (Shenhar & Sauser, 2009, p. 123) SEMDAM was developed specifically to address the potential discrepancy between the actual problem and the understanding of the COSP, thus reducing the potential for miscategorization and, potentially, system failure. As part of the research to develop SEMDAM, this work identified a usable typology of SEMs that is arranged to capitalize on the existing body of SE knowledge while perhaps crystallizing the currently fuzzy distinctions between methods.

Sheard et al., wrote “A key first step is one of diagnosis – the systems engineer must identify the kind and extent of complexity that bears on the problem set. As we have seen, complexity can exist in the problem being addressed, in its environment or context, or in the system under consideration for providing a solution to the problem” adding “The diagnoses made will allow the systems engineer to tailor his/her approaches to key aspects of the systems engineering process: requirements elicitation, trade studies, the selection of a development process life cycle, solution architecting, system


decomposition and subsystem integration, test and evaluation activities, and others.”

(Sheard, et al., A Complexity Primer for Systems Engineers, 2016) SEMDAM was specifically built as a diagnostic model, using abductive logic, to facilitate a greater understanding of the problem space.

While SEMDAM does provide a recommended SEM, that may be a secondary consideration to the improvement in understanding of both requirements and skills that SEMDAM exposes with use. Using SEMDAM is not a passive activity; program and technical leadership must identify potential attributes and statistical models, which requires knowledge of the problem space (to identify attributes) and knowledge of SE theory (to identify and use statistical modelling tools). It is hard to imagine that a program manager would keep their position if they were unable to accurately predict future schedule events or were unable to perceive trends in resource utilization. Program management, as a body of knowledge, has developed the 'triple constraint' that sets an expectation and regularly measures results. In contrast, systems engineering leadership has no readily identifiable analogous set of expectations or capabilities. This is not to imply that SE leadership does not require skills; rather, it is a realization that, absent a defined set of expectations, each SE leadership position depends on the individual to set expectations and produce results.

If implemented and consistently used, SEMDAM will force development of a greater understanding of both the problem space and the current and future SE methods, which addresses Sheard et al.'s intended result:

In addition, the diagnosis will allow the systems engineer to consider whether there may be mechanisms for shifting complexity to a more desirable region of the problem space. There may be choices available or investments that can be made to allow the decoupling of aspects of the system or of the system to its environment. Likewise,


there may also be options for shaping the feedbacks within and across problem- environment-solution elements, allowing the complexity of the situation to be harnessed via the leveraging of beneficial adaptation and self-organization. (Sheard, et al., A Complexity Primer for Systems Engineers, 2016)

5.2.4 Demonstrate SEMDAM via Empirical Case Study

The contribution of this research is using analysis of cause and effect as a basis for recommending an appropriate SE method based on the Cynefin framework. SEMDAM evaluates the ability of PM/SEM to predict or perceive system outcomes subject to the constraint that the prediction or perception is statistically significant. Since this model relies on the ability of PM/SEM to predict or perceive system outcomes, this model measures the relationship between the observer (e.g., PM/SEM) and their observations of the system. There are two outcomes from using the SEMDAM model:

 One – the intended result – PM/SEM demonstrates sufficient understanding of the

system to be able to predict or perceive system output at a statistically significant

level thus identifying the appropriate level of complexity leading to selection of

the appropriate SE method; or,

 Two – the unintended result – PM/SEM is unable to demonstrate sufficient

understanding of the system in which case the organization responsible for

developing the system must decide either that the system is too complicated and

stop or conclude that the PM/SEM is inappropriate and select a different PM/SEM to lead the system and apply the model.

Acknowledging DeRosa et al., who wrote that it is not possible to engineer complex systems using non-complex processes (DeRosa, Grisogono, Ryan, & Norman, 2008),

SEMDAM assists PM/SEM leadership in ensuring that they select and use a SEM that is sufficiently complex, and therefore appropriate and qualified, to address the level of complexity represented within the problem.

5.3 Chapter Summary

The efficacy and veracity of SEMDAM are based on accurate measurement of the PM/SEM's prediction or perception of system outcomes, which requires sufficient knowledge and understanding of the environment and problem (e.g., COSP) to proceed.

As described above, SEMDAM provides evidence such that the decision to stop system development or replace the program leadership (e.g., PM/SEM) may be based on accurate measurement and not a "best guess." Either outcome reduces the risk of system misclassification and, therefore, the risk of system failure. Reducing the risk of system failure is a fundamental benefit of using SEMDAM.


6 Conclusions

… whatever we perceive is organized into patterns for which we the perceivers are largely responsible … As perceivers we select from all the stimuli falling on our senses only those which interest us, and our interests are governed by a pattern-making tendency, sometimes called a schema. In a chaos of shifting impressions, each of us constructs a stable world in which objects have recognizable shapes, are located in depth, and have permanence … As time goes on and experience builds up, we make greater investment in our systems of labels. So a conservative bias is built in. It gives us confidence. – Mary Douglas (Kurtz & Snowden, 2003)

6.1 Summary & Findings

Neither IEEE nor INCOSE address the need to select a SEM suitable for the COSP.

Assuming a reductionist, deterministic approach is appropriate for all classes of systems is high risk, often resulting in failure. (Gilbert & Yearworth, 2016) This research builds on the work of Sheard, and of Gilbert and Yearworth, who observed that the theoretical foundations for traditional systems engineering (TSE) and project management are based on assumptions – Newtonian mechanics, analysis by decomposition, hierarchical management, and a stable environment – that limit the applicability of TSE when problems do not meet those assumptions. (Sheard S., Complex Adaptive Systems in Systems

Engineering and Management, 2009, p. 1287) Gilbert and Yearworth observed:

Systems Engineering development projects often fail to meet delivery expectations in terms of timescales and cost. Project plans, which set cost and deadline expectations, are produced and monitored within a reductionist paradigm, incorporating a deterministic view of cause and effect. This assumes that the cumulative activities and their corresponding durations that comprise the developed solution can be known in advance, and that monitoring and management intervention can ensure satisfactory delivery of an adequate solution, through implementation of this plan. (Gilbert & Yearworth, 2016)

Therefore, this research identified, documented, and described the underlying relationships that exist between several of the existing SEMs and the classical sciences.


6.1.1 Summary of SE Theoretical Foundations

Review of the theoretical foundations for SE demonstrates that the engineering of systems is undergoing a transformation from being based exclusively on the classical sciences' 'system-as-machine' paradigm to incorporating the systems sciences' 'system-as-organism' paradigm. There are some very basic topics that highlight the various perspectives.

Are systems only man-made? One answer, based on the classical sciences, says 'yes'.

IEEE 15288 wrote that the standard “describes the life cycle of systems created by humans” adding that it “concerns those systems that are man-made” (ISO/IEC/IEEE

15288:2015(E), 2015, p. 1) while INCOSE’s SEH “is consistent with ISO/IEC/IEEE

15288:2015 to ensure its usefulness across a wide range of applications – man-made systems and products." (INCOSE SEH, 2015, p. 1) Another answer, based on the systems sciences, says 'no': "What is to be defined and described as a system is not a question with an obvious or trivial answer." Bertalanffy wrote:

It will be readily agreed that a galaxy, a dog, a cell and an atom are real systems; that is, entities perceived in or inferred from observation, and existing independently of an observer. On the other hand, there are conceptual systems such as logic, mathematics (but e.g. also including music) which essentially are symbolic constructs; with abstracted systems (science) as a subclass of the latter, i.e., conceptual systems corresponding with reality. (von Bertalanffy, The History and Status of General Systems Theory, 1972)

Are people to be considered components of the system? Most SE literature describes systems as some combination or variation of people, processes, and technology; however, remnants of the mechanistic perspective that people are not part of the system remain.

White described conventional methods writing, “these methods generally assume that the solution is primarily a system of hardware and software, that requirements are fully understood from the start, that the organization in charge of the system solely controls its development and configuration, and that the external environment can be represented by interface specifications for machine interactions.” (White B. E., 2010) As recently as 2012, Ramos, Ferreira, and Barcelo wrote:

The classical systems (i.e., the system-as-machine paradigm) were small to large-scale, multidisciplinary, relatively stable and predictable, without people as a component, and were typically from the aerospace and defense industries. The new ones (i.e., the system-as-organism paradigm), which must cope with the global challenges of sustainable development, are large scale, complex, adaptive, interoperable, scalable, technology-intensive, human integrative, and comprise, for example, the so-called “super systems,” like transportation and sustainable energy. (Ramos, Ferreira, & Barcelo, 2012)

6.1.2 Ordered vs. Un-Ordered Systems

Snowden wrote “At its simplest the difference between management in order and un-order can be summarized as follows. Ordered systems are those in which a desired output can be determined in advance and achieved through the application of planning based on a foundation of good data capture and analysis” adding “In un-ordered systems no output or outcome can be determined in advance in other than the most general terms, but we can manage the starting conditions and may achieve unexpected and more desirable goals than we could have imagined in advance, or we could just be more successful in avoiding failure.” (Snowden D. J., Multi-ontology sense making: a new simplicity in decision making, 2005)

In their 2003 article on systems of systems engineering Keating et al., state

"Engineers of future complex systems face an emerging challenge of how to address problems associated with integration of multiple complex systems” with the realization that "Complex systems that have been conceived, developed, and deployed as stand-alone systems to address a singular problem can no longer be viewed as operating in isolation.”

(Keating, et al., 2003) Hybertson and Sheard wrote:


It is necessary but not sufficient to define complex systems engineering in contrast to traditional systems engineering. Conceptualizing the systems of the past as machines and the systems of the future as organisms brings important concepts to the table, but it leaves out an important reality: most systems that systems engineers will develop in the future will be a combination of machines and people—not strictly one or the other. This also means that systems engineers will be creating systems that are neither wholly artificial (like the traditional machine) nor wholly natural (like human beings), but are a hybrid of both. (Hybertson & Sheard, 2008)

Review of the SE literature suggests that it is not possible to engineer complex systems using non-complex processes. Gorod et al., wrote “something more than traditional or conventional SE is needed to deal effectively with SoSs, enterprises, and CS, that is, what is termed SoSE, ESE, and CS engineering (CSE), respectively.” (Gorod, Gandhi, White, Ireland, & Sauser, 2015, p. 25) DeRosa et al., asserted “there are classes of problems that require complex systems to deal with them” and “the engineering of a complex system is itself a complex problem.” (DeRosa, Grisogono, Ryan, & Norman, 2008) If a future SE practitioner only utilizes SEMDAM to decide if the problem is ordered versus un-ordered, that alone would be a benefit to the general body of SE practice.

6.1.3 Models & Patterns – Identifying the “Right Level”

Development of SEMDAM and its application via the empirical case study of the USNHC required the use of models to encapsulate the Cynefin sense-awareness framework into a usable method for systems engineering. Use of SEMDAM assumes that the PM/SEM team understands both SEMDAM and the COSP. Lack of knowledge of the COSP or an inability to select attribute/statistical model pairs may impact results. Because SEMDAM relies on subjective measurement, its output should be considered a recommendation (an assessment), not an exact measurement. In her dissertation, Sheard wrote of her planned contribution to provide an objective, measurable definition of system complexity as follows:

Essentially, measurements used in systems engineering and project management do not currently include measurements of complexity. Widely understood complexity


measurements exist mostly in theoretical fields and have not been applied to actual projects.

The bottom line is that there is no standard in measurement of complexity; indeed there is not even a solid body of work that addresses the measurement of complexity for systems engineering. The field is ripe for a contribution defining complexity specifically in a manner that can be used on systems. (Sheard S. A., Assessing the Impact of Complexity Attributes on System Development Project Outcomes, 2012)

While this research attempts an activity similar to Sheard’s in measuring complexity, it goes about it in a very different way. Sheard performed a survey of senior engineers to identify statistically significant attributes that correlated to eventual program success.

This research does not assume such an attribute exists; rather, it seeks to identify an attribute/statistical model pairing that, used in conjunction with the Cynefin sense-awareness framework, ensures that the PM/SEM leading the effort has sufficient understanding of the COSP to apply a complexity-appropriate SEM. Recall that use of abductive logic does not guarantee a correct answer. While it appears contradictory, SEMDAM provides an objective measurement (attribute/statistical model pair confidence) of a subjective assessment (COSP/Cynefin cause & effect).
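One way to picture such an attribute/statistical model pair confidence, offered here only as a minimal illustrative sketch and not as the SEMDAM procedure itself, is to fit a proposed statistical model to observed values of a COSP attribute and treat a goodness-of-fit p-value as the pair confidence. The attribute data and the choice of a normal model below are hypothetical placeholders.

# Illustrative sketch only; the attribute values and candidate model are hypothetical.
import numpy as np
from scipy import stats

observed_attribute = np.array([12.1, 11.8, 12.4, 13.0, 11.9, 12.2, 12.6, 12.0])

# Fit the candidate statistical model to the observed attribute values.
mu, sigma = stats.norm.fit(observed_attribute)

# Kolmogorov-Smirnov goodness-of-fit p-value used as the pair "confidence":
# a high p-value means the data do not contradict the proposed model.
statistic, p_value = stats.kstest(observed_attribute, 'norm', args=(mu, sigma))

print(f"fitted model: Normal(mu={mu:.2f}, sigma={sigma:.2f})")
print(f"attribute/statistical model pair confidence (p-value): {p_value:.3f}")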

6.1.4 Identifying COSP using Classification versus Characteristics

Applying SEMDAM to the USNHC resulted in inferring the COSP as complex and recommending ESM as the complexity-appropriate SEM for the USNHC, which differs from De Laurentis et al., who wrote “development of a National Healthcare Information Network (NHIN) … will clearly be a very complex SoS.” (De Laurentis, et al., 2007) Developing SEMDAM and applying it to the USNHC to obtain results that do not agree with world-recognized experts (De Laurentis, Dickerson, DiMario, Gartz, Jamshidi, Nahavandi, Sage, Sloane, and Walker) appears to discredit SEMDAM.

However, it is both proper and correct for a different PM/SEM to get different results


using SEMDAM. The SEMDAM result inferring that the COSP of the USNHC is complex does not disagree with an expert assessment that the USNHC has characteristics that make it look like a SoS. What makes SEMDAM unique, however, is that it requires demonstrable knowledge of the COSP by the PM/SEM to select attribute/statistical model pairs and does not rely on subjective assessment of unmeasurable system or environment characteristics.

6.1.5 Objective and Subjective System Complexity

Not unlike the previous situation, where the SEMDAM-inferred COSP was different from a static observation of characteristics, it is both proper and correct for a different PM/SEM to get different results using SEMDAM. A different PM/SEM may have a different (either greater or lesser) understanding of the COSP and therefore be able to manage development differently.

The INCOSE and IEEE definition of system includes two statements that highlight the notion that a system is dependent on the frame of reference or perspective of different individuals and is thus subjective {emphasis added}:

Systems may be configured with one or more of the following system elements: hardware, software, data, humans, processes (e.g., processes for providing service to users), procedures (e.g., operator instructions), facilities, materials, and naturally occurring entities. As viewed by the user, they are thought of as products or services;

and,

The perception and definition of a particular system, its architecture, and its system elements depend on a stakeholder’s interests and responsibilities. One stakeholder’s system-of-interest can be viewed as a system element in another stakeholder’s system-of-interest. Furthermore, a system-of-interest can be viewed as being part of the environment for another stakeholder’s system-of-interest. (ISO/IEC/IEEE 15288:2015(E), 2015, p. 11)

6.2 Areas for Future Research

The following topics are areas for future research.


6.2.1 Incorporating Uncertainty

Unlike deduction and induction, abduction can produce results that are incorrect within its formal system; however, it can still be useful as a heuristic, especially when something is known about the likelihood of different causes for b. The statement that “correlation does not imply causation” is a frequently heard admonition to avoid the most frequent misuse of abductive reasoning: post hoc ergo propter hoc (Latin: “after this, therefore because of this”), the logical fallacy that “since event Y followed event X, event Y must have been caused by event X,” often shortened simply to the post hoc fallacy.

Abduction is the primary form of logic used by the medical community and detectives. The medical community, when providing a diagnosis, frequently recommends that the patient obtain a second opinion and often provides additional information to assist the patient in determining next steps. The legal community is founded on the premise that a defendant is innocent until proven guilty and, in the case of a guilty verdict that imposes the death penalty, automatically refers the case to an appeals court. Both the medical community and the legal community have developed safeguards when using abductive logic because abduction can produce results that are incorrect.

Incorporating uncertainty into the SEMDAM model would provide a measure of confidence in the SEM assessment that would assist PM/SEM in evaluating the need for action in the event that SEMDAM recommends a SEM that is materially different from the in situ SEM.
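As one possible direction for such research, sketched here only as an illustration and not as anything developed in this work, uncertainty could be attached to the abductive inference by treating candidate COSP interpretations as competing hypotheses and reporting a posterior probability as the confidence in the recommended SEM. The prior and likelihood values below are hypothetical placeholders.

# Minimal sketch of attaching a confidence to an abductive COSP inference.
# Priors and evidence likelihoods are hypothetical, not drawn from this research.
priors = {"ordered": 0.5, "complex": 0.4, "chaotic": 0.1}
likelihood_of_evidence = {"ordered": 0.2, "complex": 0.7, "chaotic": 0.4}

unnormalized = {cosp: priors[cosp] * likelihood_of_evidence[cosp] for cosp in priors}
total = sum(unnormalized.values())
posterior = {cosp: value / total for cosp, value in unnormalized.items()}

recommended = max(posterior, key=posterior.get)
print(f"inferred COSP: {recommended} (confidence {posterior[recommended]:.2f})")

A low posterior for the leading hypothesis would signal the PM/SEM that the abductive recommendation deserves the same kind of second opinion that the medical and legal communities rely on.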


6.2.2 Adoption and Periodic Use of SEMDAM in a Program

Montgomery wrote “observing a system or process while it is in operation is an important part of the learning process, and is an integral part of understanding and learning about how systems and processes work.” (Montgomery, 2013, p. 1) SEMDAM supports two use cases:

• An aperiodic assessment tool for initial or one-time use; and,

• A periodic situational model to reassess appropriateness of an in situ SEM.

Sheard wrote “Whenever measurements such as complexity measurements are applied to real projects, there is very little literature showing what does or does not help projects to achieve improved outcomes.” (Sheard S. A., Assessing the Impact of

Complexity Attributes on System Development Project Outcomes, 2012) The empirical case study of USNHC demonstrated SEMDAM’s use as an aperiodic assessment tool.

Understanding the COSP allows the PM/SEM to tailor engineering activities that have different significance depending on the system complexity (e.g., interoperability, approaches to integration testing, interface management, and certification and accreditation {C&A}) and that often depend on the specification, development, and refinement of a system or service model before the system or service exists. Additional research into using SEMDAM as a periodic situational model would be beneficial.

6.2.3 Further Research into Extrinsic and Intrinsic Regulators

In describing environments, the realization that complexity may be related to our ability to predict observed patterns was described by Sheard et al., “emergent behavior, derived from the relationships among their elements and with the environment, via internal and external feedback loops, gives rise to observed patterns that may not be


understood or predicted.” (Sheard, et al., A Complexity Primer for Systems Engineers,

2016) Cannon researched the ability of organisms to self-regulate in response to external or internal stimuli, writing:

Here, then, is a striking phenomenon. Organisms, composed of material which is characterized by the utmost inconsistency and unsteadiness, have somehow learned the methods of maintaining constancy and keeping steady in the presence of conditions which might reasonably be expected to prove profoundly disturbing.

The constant conditions which are maintained in the body might be termed equilibria. That word, however, has come to have fairly exact meaning as applied to relatively simple physico-chemical states, in closed systems, where known forces are balanced. The coordinated physiological processes which maintain most of the steady states in the organism are so complex and so peculiar to living beings that I have suggested a special designation for these states, homeostasis. (Cannon, 1932, pp. 21, 24)

Gorod et al., wrote “enterprise is defined as a system (typically more general than a

SoS) that has a particular property called homeostasis which is a form of stability generally attributable to the whole enterprise and not just one or more identifiable components” adding “Body temperature is an example of homeostasis in a human which most would agree is one’s personal enterprise.” (Gorod, Gandhi, White, Ireland, &

Sauser, 2015, p. 9)

Building on the concepts of internal and external feedback loops, Figure 6-1 presents Ashby’s theory considering multiple sources of variety – Extrinsic Variety (V_E) and Intrinsic Variety (V_I) – that behave as a combined regulator, V_R ≈ V_E + V_I, where V_R must be at least as great as the variety in the system being regulated, V_D, for the system to be considered stable, subject to Ashby’s original condition:

V_O ≥ V_D − (V_E + V_I)     (18)


Figure 6-1: Ashby’s Theory of Requisite Variety Reflecting both Intrinsic and Extrinsic Regulators

Where:

V_O = Variety of Outcome measured logarithmically (i.e., Output)

V_D = Variety of Disturbance measured logarithmically (i.e., Input)

V_E = Variety of Extrinsic Regulator measured logarithmically (i.e., Convergence, Learning & Optimization)

V_I = Variety of Intrinsic Regulator measured logarithmically (i.e., Emergence, Adaption, Self-Organization, Autopoiesis & Homeostasis)

V_R = Variety of Combination of Extrinsic Regulator and Intrinsic Regulator measured logarithmically

Assuming V_R ≈ V_E + V_I assumes coordinated action, which may not be the case if the feedback loops do not communicate, coordinate, or synchronize.

Unsynchronized feedback from multiple sources would negatively affect predicting outcomes and may result in complex or chaotic system behavior.
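A short numerical sketch, using hypothetical state counts, illustrates how Equation (18) can be checked when varieties are measured logarithmically (here as the base-2 logarithm of the number of distinguishable states); the figures and variable names are illustrative only.

# Illustrative check of Equation (18); all state counts are hypothetical.
from math import log2

def variety(num_states: int) -> float:
    # Variety measured logarithmically, following Ashby (1956).
    return log2(num_states)

V_D = variety(64)  # disturbance (input) variety
V_E = variety(8)   # extrinsic regulator variety (planned control actions)
V_I = variety(4)   # intrinsic regulator variety (self-regulation)

# Equation (18): V_O >= V_D - (V_E + V_I); the combined regulators can drive
# outcome variety no lower than this floor.
V_O_floor = V_D - (V_E + V_I)
print(f"minimum achievable outcome variety: {V_O_floor:.1f} bits")

# Stability in the sense used above requires the combined regulator variety
# to be at least as great as the variety being regulated.
print("combined regulation sufficient" if (V_E + V_I) >= V_D else "combined regulation insufficient")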

Depending on the COSP, a system may not contain intrinsic regulation. An Extrinsic

Regulator may not be aware of the presence of Intrinsic Regulator(s) or may view them as indistinguishable from the system. The concept from the systems sciences that a


system is greater than the sum of its parts may be related to the presence of intrinsic regulation. Presence of behavior not attributable to parts of the system or planned interactions is the definition of emergence. Sillitto wrote:

Most definitions state that systems are made up of inter-connected parts and exhibit emergent properties – properties of the whole not uniquely attributable to any of the parts. Because of the nature of emergence, the system may have performance characteristics as well as function and behaviour that are qualitatively different from, though dependent on, those of its parts. Performance can be related to function (“how well”?) and to behaviour (“how quickly, how reliably”?). Structure ties this all together by defining the paths through which the parts of the system can interact with each other and the external environment. (Sillitto H. , 2012)

Further research into decomposition of Ashby’s regulator concept that considers the existence of extrinsic and intrinsic feedback would be beneficial. Is emergence related to, or dependent on, Intrinsic Feedback? It would also be beneficial to research whether system self-regulation, or intrinsic regulation, is a prerequisite for EUOS and thus a distinguishing characteristic between EOS and EUOS.

EUOS includes ESM and CSM, which suggests that a potential foundation of what makes complex or chaotic systems impossible to predict a priori is the ability of the system to self-regulate its behavior and, by extension, its output, regardless of external, planned regulation. DeRosa wrote that “adaptation occurs when modifications create enterprise capabilities that succeed and endure in their environment,” adding “we are experiencing in a very real way what the evolutionary philosopher Daniel Dennett called a design without a designer.” (DeRosa, Introduction, 2011, p. 8)


References

104th U.S. Congress. (1996, Aug 21). Public Law 104-191. Health Insurance

Portability and Accountability Act (HIPAA). Washington, DC. Retrieved Oct 28,

2015, from https://www.gpo.gov/fdsys/pkg/PLAW-104publ191/pdf/PLAW-

104publ191.pdf

111th U.S. Congress. (2009, Feb 17). Public Law 111-5. American Recovery and

Reinvestment Act (ARRA) of 2009. Washington, DC: www.gpo.com. Retrieved

Oct 29, 2015, from http://www.gpo.gov/fdsys/pkg/PLAW-111publ5/html/PLAW-

111publ5.htm

111th U.S. Congress. (2010, Mar 23). Public Law 111-148. Patient Protection and

Affordable Care Act (PPACA). Washington, DC: www.gpo.gov. Retrieved Sep

14, 2015, from www.gpo.gov: http://www.gpo.gov/fdsys/pkg/PLAW-

111publ148/pdf/PLAW-111publ148.pdf

93rd U.S. Congress. (1974, Dec 31). Public Law 93-579. The Privacy Act. Washington,

DC. Retrieved Oct 11, 2014, from https://www.gpo.gov/fdsys/pkg/STATUTE-

88/pdf/STATUTE-88-Pg1896.pdf

Ackoff, R. (1970). A Concept of Corporate Planning. Long Range Planning, 2-8.

doi:10.1016/0024-6301(70)90031-2

Agaku, I. T., Adisa, A. O., Ayo-Yusuf, O. A., & Connolly, G. N. (2014). Concern about

security and privacy, and perceived control over connection and use of health

information are related to withholding of health information from healthcare

providers. Journal of the American Medical Informatics Association, 374-8.

doi:10.1136/amiajnl-2013-002079


ANSI. (2012). The Financial Impact of Breached Protected Health Information. ANSI.

Retrieved Oct 15, 2015, from http://webstore.ansi.org/phi/

APHA. (2012). The Prevention and Public Health Fund: A critical investment in our

nation's physical and fiscal health. www.apha.org. Retrieved from

https://www.apha.org/~/media/files/pdf/factsheets/apha_prevfundbrief_june2012.

ashx

Appari, A., & Johnson, M. E. (2010). Information security and privacy in healthcare:

current state of research. International Journal of Internet and Enterprise

Management, 6(4), 279-314. doi:http://dx.doi.org/10.1504/ijiem.2010.035624

Aristotle. (350 B.C.). Metaphysics. Retrieved July 24, 2017, from

http://classics.mit.edu/Aristotle/metaphysics.8.viii.html

Arnold, A. J., & Fristrup, K. (1982). The Theory of Evolution by Natural Selection: A

Hierarchical Expansion. Paleobiology, 8(2), 113-129. Retrieved Sep 27, 2017, from

http://www.jstor.org.proxygw.wrlc.org/stable/2400448

Ashby, W. R. (1956). An Introduction to Cybernetics. New York, NY: John Wiley &

Sons Inc.

Barrows, Jr., R. C., & Clayton, P. D. (1996, Mar/Apr). Privacy, Confidentiality, and

Electronic Medical Records. JAMIA, 3(2), 139-148.

doi:10.1136/jamia.1996.96236282

Bar-Yam, Y. (2003). Dynamics of Complex Systems. Boulder, CO: Westview Press.

Retrieved from http://necsi.edu/publications/dcs/


Bellia, P. L. (2009). Federalization in Information Privacy Law. The Yale Law Journal,

868-900. Retrieved August 17, 2015, from

http://www.jstor.org.proxygw.wrlc.org/stable/40389431

Bharathy, G. K., & McShane, M. K. (2014). Applying a Systems Model to Enterprise

Risk Management. Engineering Management Journal, 26(4), 38-46. Retrieved

Jan 1, 2016, from http://search.proquest.com/docview/1630248775

Bradford, A. (2017, July 24). Deductive Reasoning vs. Inductive Reasoning. Retrieved

from livescience: https://www.livescience.com/21569-deduction-vs-

induction.html

Brougham, G. (2015). The Cynefin Mini-Book. Lulu.com. Retrieved from

https://www.infoq.com/articles/cynefin-introduction

Brown, S. F. (2009). Naivety in Systems Engineering Research: are we putting the

methodological cart before the philosophical horse? 7th Annual Conference on

Systems Engineering Research 2009 (CSER 2009), (pp. 1-4). Retrieved from

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.472.9493&rep=rep1&t

ype=pdf

Bush, G. W. (2004, April 30). EO 13335. Incentives for the Use of Health Information

Technology and Establishing the Position of the National Health Information

Technology Coordinator, 2. Washington, DC: www.gpo.gov. Retrieved Sep 24,

2015, from http://www.gpo.gov/fdsys/pkg/FR-2004-04-30/pdf/04-10024.pdf

Cannon, W. B. (1932). The Wisdom of the Body. New York: Norton.


Cate, F. H. (2006). The Failure of Fair Information Practice Principles. In Consumer

Protection in the Age of the Information Economy (pp. 343-379). Burlington, VT:

Ashgate. Retrieved Apr 22, 2013, from http://ssrn.com/abstract=1156972

Checkland, P. (2000). Soft Systems Methodology: A Thirty Year Retrospective. Systems

Research And Behavioral Science, S11-S58. doi:10.1002/1099-

1743(200011)17:1+<::aid-sres374>3.0.co;2-o

Conway, M. E. (1968). How do Committees Invent? Datamation, 14(4), 28-31. Retrieved

Sep 8, 2017, from http://www.melconway.com/Home/pdf/committees.pdf

Dahlberg, R. (2015). Resilience and Complexity. Journal of Current Cultural Research,

7, 541-557. Retrieved from http://www.cultureunbound.ep.liu.se

Dahmann, J. S., & Baldwin, K. J. (2008). Understanding the Current State of US Defense

Systems of Systems and Implications for Systems Engineering. 2008 2nd Annual

IEEE International Systems Conference, (pp. 3-9). Montreal, Canada.

doi:10.1109/SYSTEMS.2008.4518994

De Laurentis, D., Dickerson, C., DiMario, M., Gartz, P., Jamshidi, M. M., Nahavandi, S.,

. . . Walker, D. R. (2007, Sep). A Case for an International Consortium on

System-of-Systems Engineering. IEEE Systems Journal, 1(1), 68-73.

doi:10.1109/JSYST.2007.904242

Dennett, D. C. (1996). Darwin's Dangerous Idea. The Sciences, 35(3), 34-40.

doi:10.1002/j.2326-1951.1995.tb03633.x

DeRosa, J. K. (2011). Foreword. In G. Rebovich, & B. E. White (Eds.), Enterprise

Systems Engineering - Advances in the Theory and Practice (pp. xii-x). Boca

Raton, FL: Taylor and Francis Group, LLC. Retrieved Jan 13, 2016


DeRosa, J. K. (2011). Introduction. In G. Rebovich Jr, & B. E. White (Eds.), Enterprise

Systems Engineering: Advances in the Theory and Practice (pp. 1-30). Boca

Raton, FL: CRC Press. Retrieved Jan 1, 2016

DeRosa, J. K., Grisogono, A.-M., Ryan, A. J., & Norman, D. O. (2008). A Research

Agenda for the Engineering of Complex Systems. SysCon. Montreal, Canada.

doi:10.1109/systems.2008.4518982

DeRosa, J. K., Rebovich Jr., G., & Swarz, R. S. (2006). An Enterprise System

Engineering Model. INCOSE International Symposium. 16, pp. 1423–1434.

INCOSE. doi:10.1002/j.2334-5837.2006.tb02823.x

Dewey, R. A. (2007). "The Whole is Other than the Sum of the Parts". Retrieved July 24,

2017, from Psychology: An Introduction: http://www.intropsych.com/

ch04_senses/whole_is_other_than_the_sum_of_the_parts.html

Dietz, J., Hoogervorst, J., Albani, A., Aveiro, D., Babkin, E., Barjis, J., . . . Winter, R.

(2013). The discipline of enterprise engineering. International Journal of

Organisational Design and Engineering, 3(1), 86. doi:10.1504/ijode.2013.053669

Dimitropoulos, L., & Rizk, S. (2009). A State-Based Approach to Privacy and Security

for Interoperable Health Information Exchange. Health Affairs, 28(2), 428-434.

doi:http://dx.doi.org/10.1377/hlthaff.28.2.428

Dodder, R., & Dare, R. (2000, Oct 31). Research Seminar in Engineering Systems.

Retrieved from www.mit.edu:

http://web.mit.edu/esd.83/www/notebook/ComplexityKD.PDF

Donabedian, A. (2005). Evaluating the Quality of Medical Care. The Milbank Quarterly,

44(3), 691-729. doi:10.1111/j.1468-0009.2005.00397.x


Einstein, A. (1934). On the Method of Theoretical Physics. v, 1(2), 163-169. Retrieved

from http://www.jstor.org/stable/184387

Elgass, S. C., Hawthore, L. S., Kaprielian, C. A., Kim, P. S., Miller, A. K., Ricci, L. R., .

. . Stuebe, R. E. (2011). Enterprise Activities - Evolving toward an Enterprise. In

G. Rebovich Jr., & B. E. White (Eds.). Boca Raton, FL: CRC Press. Retrieved Jan

6, 2013

Encyclopaedia Britannica. (2017). Definition of Entelechy. Retrieved from

https://www.britannica.com/topic/entelechy

Estefan, J. A. (2007). Survey of Model-Based Systems Engineering (MBSE)

Methodologies. INCOSE MBSE Focus Group, San Diego. Retrieved from

http://www.omgsysml.org/MBSE_Methodology_Survey_RevB.pdf

French, S. (2012). Cynefin, statistics and decision analysis. Journal of the Operational

Research Society, 64(4), 547-561. doi:10.1057/jors.2012.23

GAO. (2016). Healthcare.Gov Actions Needed to Enhance Information Security and

Privacy Controls. Washington: US Government Accountability Office. Retrieved

Oct 13, 2016, from http://www.gao.gov/assets/680/676003.pdf

Gardner, B. (2013). Making sense of Enterprise 2.0. VINE, 43(2), 149-160.

doi:10.1108/03055721311329936

Gilbert, D., & Yearworth, M. (2016). Complexity in a Systems Engineering

Organization: An Empirical Case Study. Systems Engineering, 00(0).

doi:10.1002/sys.21359

Gilbertson, R., Tanju, B., & Eveleigh, T. (2017, September). A Complexity Based

Heuristic Decision Analysis Model to Recommend Systems Engineering Domain.


(G. Gaynor, Ed.) IEEE Enterprise Management Review, 45(3), 64-81.

doi:10.1109/EMR.2017.2734358

Gladwell, M. (2008). Outliers. New York: Little, Brown and Company.

Gorod, A., Gandhi, S. J., White, B. E., Ireland, V., & Sauser, B. (2015). Modern History

of System of Systems, Enterprise, and Complex Systems. In A. Gorod, S. J.

Gandhi, B. E. White, V. Ireland, & B. Sauser (Eds.), Case Studies in System of

Systems, Enterprise Systems, and Complex System Engineering (p. 779). Boca

Raton, FL: CRC Press Taylor & Francis Group. Retrieved Nov 21, 2016

Gorod, A., Sauser, B., & Broadman, J. (2008). System-of-Systems Engineering

Management: A Review of Modern History and a Path Forward. IEEE Systems

Journal, 2(4), 484-499. doi:http://dx.doi.org/10.1109/jsyst.2008.2007163

Gorod, A., White, B. E., Ireland, V., Gandhi, S. J., & Sauser, B. (Eds.). (2015). Case

Studies in System of Systems, Enterprise Systems, and Complex Systems

Engineering. Boca Raton, FL: CRC Press Taylor & Francis Group. Retrieved Nov

21, 2016

Gorod, A., White, B. E., Ireland, V., Gandhi, S. J., & Sauser, B. (2015). Preface. In A.

Gorod, B. E. White, V. Ireland, S. J. Gandhi, & B. Sauser (Eds.), Case Studies in

System of Systems, Enterprise Systems, and Complex Systems Engineering (pp. xi-

xvi). Boca Raton, FL: CRC Press.

Grigoroudis, E., & Phillis, Y. A. (2013, Dec). Modeling Healthcare System-of-Systems:

A Mathematical Programming Approach. IEEE Systems Journal, 7(4), 571-580.

doi:10.1109/JSYST.2013.2251984


Gygi, C., Covey, S. R., DeCarlo, N., & Williams, B. D. (2012). Six Sigma for Dummies

(2nd ed.). Hoboken, NJ: Wiley.

Hadler, D. N. (2016, Aug 23). Saving the Doctor-Patient Relationship. (T. Ashbrook,

Interviewer) NPR.org. WAMU, Washington . Retrieved Aug 23, 2016, from

http://www.npr.org/podcasts/510053/on-point-with-tom-ashbrook

Hayenga, C. (2008). Complex and Complicated Systems Engineering. INCOSE Insight,

17-19. doi:10.1002/inst.200811117

HHS. (2009, Feb 17). Health Information Technology For Economic And Clinical Health

Act (HITECH). Retrieved June 17, 2015, from www.hhs.gov: http://

www.hhs.gov/ocr/privacy/hipaa/understanding/coveredentities/ hitechact.pdf

HHS. (2012, Mar 27). 77 FR 18310. Patient Protection and Affordable Care Act;

Establishment of Exchanges and Qualified Health Plans; Exchange Standards for

Employers. Washington, DC: www.gpo.gov. Retrieved Oct 13, 2016, from

https://www.gpo.gov/fdsys/pkg/FR-2012-03-27/pdf/2012-6125.pdf

HHS/ASPE. (1999, Nov 3). 64 FR 59918. Standards for Privacy of Individually

Identifiable Health Information. Washington, DC: www.gpo.gov. Retrieved from

https://www.gpo.gov/fdsys/pkg/FR-1999-11-03/pdf/99-28440.pdf

HHS/ASPE. (2000, Dec 28). 65 FR 82462. Standards for Privacy of Individually

Identificable Health Information, 82462-82829. Washington, DC: www.gpo.gov.

Retrieved Nov 5, 2014, from https://www.gpo.gov/fdsys/pkg/FR-2000-12-

28/pdf/00-32678.pdf


HHS/CMS. (2003, Feb 20). 68 FR 8334. Health Insurance Reform: Security Standards,

8334-8381. Washington, DC: www.gpo.gov. Retrieved Oct 28, 2015, from

https://www.gpo.gov/fdsys/pkg/FR-2003-02-20/pdf/03-3877.pdf

HHS/CMS. (2013, June 19). 78 FR 37032. Patient Protection and Affordable Care Act;

Program Integrity: Exchange, SHOP, Premium Stabilization Programs, and

Market Standards, 37032-37095. Washington, DC: www.gpo.gov. Retrieved Oct

13, 2016, from https://www.gpo.gov/fdsys/pkg/FR-2013-06-19/pdf/2013-

14540.pdf

HHS/CMS. (2014, Dec 31). Data And Program Reports - Centers For Medicare &

Medicaid Services: State Breakdown of Payments to Medicare and Medicaid

Providers through December 31, 2014. Washington, DC: Center for Medicare &

Medicaid Services (CMS). Retrieved June 23, 2015, from www.CMS.gov:

http://www.cms.gov/ Regulations-and-Guidance/ Legislation

/EHRIncentivePrograms/ DataAndReports.html

HHS/OCR. (2003, May). Summary of the HIPAA Privacy Rule. Retrieved Sep 17, 2014,

from Health & Human Services: http://www.hhs.gov/ocr/privacy/hipaa/

understanding/summary/privacysummary.pdf

HHS/OCR. (2009, Aug 24). 74 FR 42767. Federal Register, 74, 42740-42770.

Washington, DC. Retrieved Nov 2, 2015, from

http://www.gpo.gov/fdsys/pkg/FR-2009-08-24/pdf/E9-20169.pdf

HHS/OCR. (2013, Jan 25). 78 FR 5566. Modifications to the HIPAA Privacy, Security,

Enforcement, and Breach Notification Rules Under the Health Information

Technology for Economic and Clinical Health Act and the Genetic Information


Nondiscrimination Act; Other Modifications to the HIPAA Rules, 5586-5702.

Washington, DC: www.gpo.gov. Retrieved Sep 29, 2014, from www.gpo.com:

http://www.gpo.gov/fdsys/pkg/FR-2013-01-25/pdf/2013-01073.pdf

HHS/OCR. (2015, Aug 15). Portal Breaches Affecting 500 or More Individuals.

Retrieved Aug 15, 2015, from Ocrportal.hhs.gov:

https://ocrportal.hhs.gov/ocr/breach/breach_report.jsf

HHS/OCR. (2016). Breach Portal. Notice to the Secretary of HHS Breach of Unsecured

Protected Health Information. Washington, DC: ocrportal.hhs.gov. Retrieved Sep

15, 2017, from https://ocrportal.hhs.gov/ocr/breach/breach_report.jsf

HHS/OIG. (2014, Aug). OEI-03-14-00231. An Overview of 60 Contracts that

Contributed to the Development and Operation of the Federal Marketplace, 20.

Washington, DC: oig.hhs.gov. Retrieved Sep 14, 2015, from

http://oig.hhs.gov/oei/reports/oei-03-14-00231.asp

HHS/ONC. (2014). Report To Congress October 2014. Washington, DC: HHS.

Retrieved Aug 4, 2015, from http://www.healthit.gov/sites/default/files/

rtc_adoption_and_exchange9302014.pdf

HHS/ONC. (2015). Guide to Privacy and Security of Electronic Health Information.

Retrieved Dec 3, 2015, from www.healthit.gov: https://www.healthit.gov/

sites/default/files/pdf/privacy/privacy-and-security-guide.pdf

HIPAA Journal. (2015, Feb 5). HIPAA Breach Report Portal Changes. Retrieved Dec 18,

2015, from www.hipaajournal.com: http://www.hipaajournal.com/hhs-updates-

hipaa-data-breach-reporting-portal-014/


Holland, J. H. (1992, Winter). Complex Adaptive Systems. Daedalus, 121(1), 17-30.

Retrieved from http://www.jstor.org/stable/20025416

Hybertson, D., & Sheard, S. (2008). Integrating and Unifying Old and New Systems

Engineering Elements. Insight, 11(1), 13-16. doi:10.1002/inst.200811113

INCOSE. (2007). Systems Engineering Vision 2020. Retrieved Dec 12, 2016, from

INCOSE Library: connect.incose.org/Library

INCOSE. (2014). A world in motion: Systems Engineering Vision 2025. Retrieved Dec

12, 2016, from INCOSE Library: connect.incose.org/Library

INCOSE. (2016). What is Systems Engineering? Retrieved Aug 18, 2016, from

incose.org: http://www.incose.org/AboutSE/WhatIsSE

INCOSE SEBoK. (2016, Oct 28). Guide to the Systems Engineering Body of Knowledge

(SEBoK). Retrieved Nov 14, 2016, from sebokwiki.org: http://sebokwiki.org/

wiki/ Guide_to_the_ Systems_Engineering_ Body_of_Knowledge_(SEBoK)

INCOSE SEH. (2011, Oct). Systems Engineering Handbook. (C. Haskins, Ed.) Retrieved

Oct 25, 2013, from www.incose.org: http://www.incose.org/docs/default-

source/ProductsPublications/se-handbook-version-3-2-2.pdf

INCOSE SEH. (2015). Systems Engineering Handbook. (4), 284. (D. D. Walden, G. J.

Roedler, K. J. Forsberg, R. D. Hamelin, & T. M. Shortell, Eds.) San Diego, CA:

Wiley. Retrieved Dec 29, 2015, from International Council on Systems

Engineering (INCOSE): https://incose.ps.membersuite.com/

ISO/IEC/IEEE 15288:2015(E). (2015). International Standard - Systems and software

engineering - System life cycle processes (1 ed.). New York, NY: IEEE.

doi:10.1109/ieeestd.2015.7106435


James, J. T. (2013). A New, Evidence-based Estimate of Patient Harms Associated with

Hospital Care. Journal Of Patient Safety, 9(3), 122-128.

doi:10.1097/pts.0b013e3182948a69

JASON - The MITRE Corporation. (2014). A Robust Health Data Infrastructure.

Washington, DC: Healthit.gov. Retrieved Aug 5, 2015, from

http://www.healthit.gov/sites/default/files/ptp13-700hhs_white.pdf

Johnston, A. C., & Warkentin, M. (2008). Information privacy compliance in the

healthcare industry. Information Management & Computer Security, 16(1), 5-19.

doi:10.1108/09685220810862715

Jung, H.-Y., Unruh, M. A., Kaushal, R., & Vest, J. R. (2015). Growth Of New York

Physician Participation In Meaningful Use Of Electronic Health Records Was

Variable, 2011−12. Health Affairs, 34(6), 1035-1043.

doi:10.1377/hlthaff.2014.1189

Kantardzic, M. (2011). Data Mining: Concepts, Models, Methods, and Algorithms (2nd

ed.). Hoboken, NJ: Wiley-IEEE Press.

Keating, C. B., Padilla, J. J., & Adams, K. (2008). System of Systems Engineering

Requirements: Challenges and Guidelines. Engineering Management Journal,

20(4), 24-31. Retrieved Jan 13, 2016, from

http://search.proquest.com/docview/208948101

Keating, C., Rogers, R., Unal, R., Dryer, D., Sousa-Poza, A., Stafford, R., . . . Rabadi, G.

(2003). System of Systems Engineering. Engineering Management Journal,

15(3), 36-45. doi:10.1080/10429247.2003.11415214


KING ET AL. v. BURWELL, SECRETARY OF HEALTH AND HUMAN SERVICES,

ET AL., 14-114 (United States Supreme Court June 25, 2014). Retrieved August

25, 2015, from http://www.supremecourt.gov/opinions/14pdf/14-114_qol1.pdf

Kurtz, C. F., & Snowden, D. J. (2003, Apr 6). The new dynamics of strategy: Sense-

making in a complex and complicated world. IBM Systems Journal, 42(3), 462-

483. doi:10.1147/sj.423.0462

Langton, C. G. (1990). Computation at the edge of chaos: Phase transitions and

emergent computation. Ann Arbor, MI: University of Michigan. Retrieved from

https://search-proquest-com.proxygw.wrlc.org/results/9FDEAC9798C34434PQ

Lorenz, E. N. (1963). Deterministic Nonperiodic Flow. Journal of the Atmospheric

Sciences, 20, 130-141.

Louis, M. R. (1980). Surprise and Sense Making: What Newcomers Experience in

Entering Unfamiliar. Administrative Science Quarterly, 25(2), 226-251.

doi:10.2307/2392453

Mach, A. L., & Redhead, C. S. (2014). Federal Funding for Health Insurance

Exchanges. Washington, DC: Congressional Research Service. Retrieved January

25, 2015, from http://digital.library.unt.edu/ark:/67531/metacd284482/

Maier, M. W. (1996). Architecting Principles for Systems-of-Systems. INCOSE

International Symposium, 565-573. doi:10.1002/j.2334-5837.1996.tb02054.x

Maier, M. W. (1998). Architecting Principles for Systems-of-Systems. Systems

Engineering, 267-284. doi:10.1002/(sici)1520-6858(1998)1:4<267::aid-

sys3>3.0.co;2-d


Makary, M. A., & Daniel, M. (2016, May 3). Medical error—the third leading cause of

death in the US. The BMJ (formerly the British Medical Journal), 1-5.

doi:10.1136/bmj.i2139

Manson, S. M. (2001). Simplifying complexity: a review of complexity theory.

Geoforum, 32(3), Geoforum. doi:10.1016/s0016-7185(00)00035-x

McEver, J. (2016, Nov 11). INCOSE Complexity Working Group. Retrieved Nov 14,

2016, from connect.incose.org:

https://connect.incose.org/WorkingGroups/ComplexSystems/Pages/Home.aspx

McGraw, D., Leiter, A., & Rasmussen, C. (2013, Oct). Rights and Requirements: A

Guide to Privacy and Security of Health Information in California. Retrieved Jul

25, 2015, from http://www.chcf.org/publications/2013/10 /rights-requirements-

privacy-security

McGregor, S. L., & Murnane, J. A. (2010). Paradigm, methodology and method:

intellectual integrity in consumer scholarship. International Journal of Consumer

Studies, 34(4), 419-427. doi:10.1111/j.1470-6431.2010.00883.x

Merriam-Webster. (2017). Definition of ANALYSIS. Retrieved from www.merriam-

webster.com: https://www.merriam-webster.com/dictionary/analysis

Merriam-Webster. (2017). Definition of ASSESSMENT. Retrieved from www.merriam-

webster.com: https://www.merriam-webster.com/dictionary/assessment

Merriam-Webster. (2017). Definition of HEURISTIC. Retrieved from www.merriam-

webster.com: https://www.merriam-webster.com/dictionary/heuristic

Merriam-Webster. (2017). Definition of THEORY. Retrieved from www.merriam-

webster.com: https://www.merriam-webster.com/dictionary/theory


Minitab. (2010). Choosing an Analysis. Statistical Inference and t-Tests. Minitab.

Retrieved June 2016, from

http://www.minitab.com/uploadedFiles/Documents/sample-

materials/TrainingTTest16EN.pdf

MITRE. (2014). Systems Engineering Guide. (G. Rebovich, Jr., Ed.) Retrieved Jul 20,

2016, from www.mitre.org:

http://www.mitre.org/sites/default/files/publications/se-guide-book-interactive.pdf

MITRE Corporation. (2014). A Robust Health Data Infrastructure. Washington, DC:

Healthit.gov. Retrieved Aug 5, 2015, from

http://www.healthit.gov/sites/default/files/ptp13-700hhs_white.pdf

Montgomery, D. C. (2013). Design and Analysis of Experiments (8 ed.). Hoboken, New

Jersey: John Wiley & Sons.

Moore, I. N., Snyder, S. L., Miller, C., An, A. Q., Blackford, J. U., Zhou, C., & Hickson,

G. B. (2007). Confidentiality and Privacy in Health Care from the Patient's

Perspective: Does HIPAA Help? Health Matrix: Journal of Law Medicine, 17(2),

215-272. Retrieved Oct 27, 2015

Moskop, J. C., Marco, C. A., Larkin, G. L., Gelderman, J. M., & Derse, A. R. (2005,

Jan). From Hippocrates to HIPAA: Privacy and Confidentiality in Emergency

Medicine - Part I: Conceptual, Moral, and Legal Foundations. Annals of

Emergency Medicine, 45(1), 53-59. doi:10.1016/j.annemergmed.2004.08.008

NASA. (2009). Systems Engineering Handbook. Hanover, MD: NASA. Retrieved Jan 13,

2015


New World Encyclopedia. (2017). Abductive reasoning. Retrieved from

www.newworldencyclopedia.org:

http://www.newworldencyclopedia.org/entry/Abductive_reasoning

Nilsen, P. (2015). Making sense of implementation theories, models and frameworks.

Implementation Science, 10(1). doi:10.1186/s13012-015-0242-0

Nordstokke, D. W., & Zumbo, B. D. (2010). A New Nonparametric Levene Test for

Equal Variances. Psicológica, 401-430. Retrieved from

http://www.redalyc.org/html/169/16917017011/

Norman, D. O., & Kuras, M. L. (2006). Engineering Complex Systems. In

Understanding Complex Systems (pp. 200-245). Berlin Heidelberg: Springer.

doi:http://dx.doi.org/10.1007/3-540-32834-3_10

NPR Morning Edition (2016). Life Expectancy In U.S. Drops For First Time In Decades,

Report Finds. [NPR]. Washington, DC. Retrieved Dec 8, 2016, from

http://www.npr.org/sections/health-shots/2016/12/08/504667607/life-expectancy-

in-u-s-drops-for-first-time-in-decades-report-finds

Oliver, D. W., Kelliher, T. P., & Keegan, Jr., J. G. (1997). Engineering complex systems

with models and objects. New York: McGraw-Hill. Retrieved from

https://www.researchgate.net/publication/246068961

Orasanu, J. M., & Shafto, M. G. (2009). Designing for Cognitive Task Performance. In

A. P. Sage, & W. B. Rouse (Eds.), Handbook of Systems Engineering and

Management (2 ed., pp. 691-721). Hoboken, NJ: John Wiley & Sons.

Oxford Dictionaries. (2017). Definition of 'system' in English. Retrieved July 17, 2017,

from https://en.oxforddictionaries.com/definition/system


Patterson, F. G. (2009). Systems Engineering Life Cycles: Life Cycles for Research,

Development, Test, and Evaluation; Acquisition; and Planning and Marketing. In

A. P. Sage, & W. B. Rouse (Eds.), Handbook of Systems Engineering and

Management (2nd ed., pp. 65-116). Hoboken, NJ: John Wiley & Sons.

Piaszczyk, C. (2011). Model Based Systems Engineering with Department of Defense

Architectural Framework. Systems Engineering, 14(3), 305-326.

Ponemon Institute. (2015, Feb). Fifth Annual Study on Medical Identity Theft. Retrieved

Jun 18, 2015, from www.medidfraud.org: http://medidfraud.org/wp-

content/uploads/2015/02/2014_Medical_ID_Theft_Study1.pdf

Ramos, A. L., Ferreira, J. V., & Barcelo, J. (2012). Model-Based Systems Engineering:

An Emerging Approach for Modern Systems. IEEE Transactions on Systems,

Man, and Cybernetics, Part C (Applications and Reviews), 101-111.

doi:10.1109/tsmcc.2011.2106495

Rebovich Jr, G., & White, B. E. (Eds.). (2011). Enterprise Systems Engineering:

Advances in the Theory and Practice. Boca Raton, FL: CRC Group. Retrieved

Jan 1, 2016

Rebovich Jr., G. (2008). The Evolution of Systems Engineering. 2nd Annual IEEE

Systems Conference. Montreal: IEEE. doi:10.1109/systems.2008.4518992

Rebovich Jr., G. (2011). Systems Thinking for the Enterprise. In J. G. Rebovich, & B. E.

White (Eds.), Enterprise Systems Engineering: Advances in the Theory and

Practice (pp. 31-61). Boca Raton, FL: CRC Press Taylor & Francis Group.

Retrieved Jan 6, 2016


Rhodes, D. H., Lamb, C. T., & Nightingale, D. J. (2008). Empirical Research on Systems

Thinking and Practice in the Engineering Enterprise. SysCom 2008. Montreal:

IEEE. doi:10.1109/SYSTEMS.2008.4519015

Rhodes, D., & Hastings, D. (2004). The Case for Evolving Systems Engineering as a

Field within Engineering Systems. 1st MIT Engineering Symposium (pp. 1-10).

Cambridge: MIT. Retrieved from http://cmapspublic3.ihmc.us/

Roberts, B., Mazzuchi, T., & Sarkani, S. (2016). Engineered Resilience for Complex

Systems as a Predictor for Cost Overruns. Systems Engineering, 19(2), 111-132.

doi:10.1002/sys.21339

Roedler, G. J., & Jones, C. (2005). Technical Measurement. INCOSE Measurement

Working Group. INCOSE. Retrieved from http://www.incose.org/docs/default-

source/ProductsPublications/technical-measurement-guide---dec-

2005.pdf?sfvrsn=4

Rout, T., Walker, A., & Dorling, A. (2017). Adopting the new standard for process

assessment. Software Quality Professional, 19(3), 4-12. Retrieved from

https://search-proquest-com.proxygw.wrlc.org/docview/1914142219

Russom, M. B., Sloan, R. H., & Warner, R. (2011). Legal Concepts Meet Technology: A

50-State Survey of Privacy Laws. Proceedings of the 2011 Workshop on

Governance of Technology, Information, and Policies. Orlando: ACM.

doi:10.1145/2076496.2076500

Ryan, A. (2008, Sep 10). What is a Systems Approach? Cornell University Nonlinear

Sciences, 39. Retrieved Sep 28, 2017, from https://arxiv.org/pdf/0809.1698.pdf


SAGE Research Methods. (2008). Typological Analysis. (L. M. Given, Ed.) SAGE.

doi:10.4135/9781412963909.n472

Sage, A. P. (2009). Systematic Measurements. In A. P. Sage, & W. B. Rouse (Eds.),

Handbook of Systems Engineering and Management (2 ed., pp. 575-644).

Hoboken, NJ: John Wiley & Sons. Retrieved Sep 18, 2013

Sage, A. P., & Cuppan, C. D. (2001). On the Systems Engineering and Management of

Systems of Systems and Federations of Systems. Information, Knowledge,

Systems Management, 325-345.

Sage, A. P., & Rouse, W. B. (2009). An Introduction to Systems Engineering and

Systems Management. In A. P. Sage, & W. B. Rouse (Eds.), Handbook of

Systems Engineering and Management (2 ed.). Hoboken, New Jersey: John Wiley

& Sons. Retrieved Sep 18, 2013

Sage, A. P., & Rouse, W. B. (Eds.). (2009). Handbook of Systems Engineering and

Management (2 ed.). Hoboken, New Jersey: Wiley.

Sage, A., & Biemer, S. (2007). Processes for System Family Architecting, Design, and

Integration. IEEE Systems Journal, 1(1), 5-16. doi:10.1109/jsyst.2007.900240

Salkind, N. J. (2012). Exploring Research (8th ed.). Boston: Pearson.

Schlager, K. J. (1956). Systems Engineering - Key to Modern Development. IRE

Transactions on Engineering Management, 3(3), 64-66. doi:10.1109/iret-

em.1956.5007383

Semanye's Case, 5 Co. Rep. 9 (English Reports 1604). Retrieved Sep 9, 2015, from

http://www.commonlii.org/int/cases/EngR/1572/333.pdf

Senge, P. M. (2000). The Fifth Discipline. New York: Doubleday.


Sheard, S. (2008). A Framework for System Resilience Discussions. INCOSE

International Symposium, 18 (1), pp. 1243-1257. doi:10.1002/j.2334-

5837.2008.tb00875.x

Sheard, S. (2009). Complex Adaptive Systems in Systems Engineering and Management.

In A. P. Sage, & W. B. Rouse (Eds.), Handbook of Systems Engineering and

Management (2 ed., pp. 1283-1318). Hoboken, NJ: John Wiley & Sons. Retrieved

Sep 18, 2013

Sheard, S. A. (2012). Assessing the Impact of Complexity Attributes on System

Development Project Outcomes. PhD Dissertation, Stevens Institute of

Technology, Hoboken. Retrieved Jan 18, 2017, from

http://search.proquest.com/docview/1033566354

Sheard, S. A., & Mostashari, A. (2009). Principles of Complex Systems for Systems

Engineering. Systems Engineering, 12(4), 295-311. doi:doi:10.1002/sys.20124

Sheard, S. A., & Mostashari, A. (2010). A Complexity Typology for Systems

Engineering. INCOSE International Symposium, 20 (1), pp. 933-945. Chicago, IL.

doi:10.1002/j.2334-5837.2010.tb01115.x

Sheard, S., Cook, S., Honour, E., Hybertson, D., Krupa, J., McEver, J., . . . White, B.

(2015). INCOSE Complex Systems Working Group White Paper - A Complexity

Primer for Systems Engineers. INCOSE. Retrieved May 5, 2016

Sheard, S., Cook, S., Honour, E., Hybertson, D., Krupa, J., McEver, J., . . . White, B.

(2016). A Complexity Primer for Systems Engineers. White Paper, INCOSE

Complex Systems Working Group.


Shenhar, A. J., & Bonen, Z. (1997). The New Taxonomy of Systems: Toward an

Adaptive Systems Engineering Framework. IEEE Transactions On Systems, Man,

And Cybernetics - Part A: Systems And Humans, 27(2), 137-145.

doi:http://dx.doi.org/10.1109/3468.554678

Shenhar, A. J., & Sauser, B. (2009). Systems Engineering Management: The

Multidisciplinary Discipline. In A. P. Sage, & W. B. Rouse (Eds.), Handbook of

Systems Engineering and Management (2 ed., pp. 117-154). Hoboken, NJ: John

Wiley & Sons.

Sillitto, H. (2012). Integrating Systems Science, Systems Thinking, and Systems

Engineering: understanding the differences and exploiting the synergies.

Proceedings of the 22nd Annual INCOSE International Symposium. 22, pp. 532-

547. Rome: INCOSE. doi:10.1002/j.2334-5837.2012.tb01354.x

Sillitto, H. G. (2009). On Systems Architects and Systems Architecting: some thoughts

on explaining and improving the art and science of systems architecting. INCOSE

International Symposium. 19, pp. 970-985. INCOSE. doi:10.1002/j.2334-

5837.2009.tb00995.x

Simon, H. A. (1962). The Architecture of Complexity. Proceedings of the American

Philosophical Society, 106(6), 467-482. Retrieved from

http://www.jstor.org.proxygw.wrlc.org/stable/985254

Sitton, M., & Reich, Y. (2015). Enterprise Systems Engineering for Better Operational

Interoperability. Systems Engineering, 18(6), 625-638. doi:10.1002/sys.21331

Smith, J. G. (1825). An Analysis of Medical Evidence Comprising Directions for

Practitioners, in the View of Becoming Witnesses in Courts of Justice. London:


Thomas and George Underwood. Retrieved Jan 28, 2015, from

https://books.google.com/books?id=oecGAAAAQAAJ

Smith, K. B. (2002). Typologies, Taxonomies, and the Benefits of Policy Classification.

Policy Studies Journal, 30(3), 379-395.

Snowden, D. (2011, Oct 22). Typology or Taxonomy? Retrieved Aug 21, 2017, from

cognitive-edge.com: http://cognitive-edge.com/blog/typology-or-taxonomy/

Snowden, D. J. (2005). Multi-ontology sense making: a new simplicity in decision

making. Journal Of Innovation In Health Informatics, 13(1), 45-53.

doi:10.14236/jhi.v13i1.578

Snowden, D. J., & Boone, M. E. (2007). A Leader's Framework for Decision Making.

Harvard Business Review, 68-76. Retrieved Jan 9, 2016, from

https://hbr.org/2007/11/a-leaders-framework-for-decision-making

Sober, E. (2012). Core Questions in Philosophy (6th ed.). Boston: Pearson Education.

Squires, D., & Anderson, C. (2015). U.S. Health Care from a Global Perspective. The

Commonwealth Fund. Retrieved Sep 14, 2017, from

http://www.commonwealthfund.org/publications/issue-briefs/2015/oct/us-health-

care-from-a-global-perspective

Stanford Encyclopedia of Philosophy. (2017, Apr 28). Abduction. Retrieved from

plato.stanford.edu: https://plato.stanford.edu/entries/abduction/

Stevens, R. (2008). Profiling Complex Systems. 2008 2nd Annual IEEE Systems

Conference. IEEE. doi:10.1109/systems.2008.4519017

The Employers' Liability Cases, Nos. 216, 222 (Circuit Courts of the U.S. for the

Western District of Tennessee and the Western District of Kentucky Jan 6, 1908).


Retrieved Aug 23, 2016, from

https://supreme.justia.com/cases/federal/us/207/463/case.html

Towers, J. (2016, Jan 11). The 5 Principles of Model Based Systems Engineering.

Retrieved Jan 11, 2016, from Incosewiki.info:

http://www.incosewiki.info/Documents/Site_Resources/Files/MBSE/

Trochim, W. M. (2006). Deduction & Induction. Retrieved from

www.socialresearchmethods.net:

http://www.socialresearchmethods.net/kb/index.php

Tumlinson, J. K. (2017). Ignorance and Apathy. Retrieved Aug 24, 2017, from

www.viewonline.com:

http://www.viewonline.com/pages/editorials/ignorance.htm

Tyson, P. (2001, Feb 27). The Hippocratic Oath Today. Retrieved Jul 25, 2015, from

Pbs.org: http://www.pbs.org/wgbh/nova/body/hippocratic-oath-today.html

U.S. Bill of Rights, Amendment IV (Three-fourths of the State Legislatures Dec 15,

1791). Retrieved Sep 8, 2015, from

http://www.archives.gov/exhibits/charters/bill_of_rights_transcript.html

Van Beurden, E. K., Kia, A. M., Zask, A., Dietrich, U., & Rose, L. (2011). Making sense

in a complex landscape: how the Cynefin Framework from Complex Adaptive

Systems Theory can inform health promotion practice. Health Promotion

International, 28(1), 73-83. doi:10.1093/heapro/dar089

von Bertalanffy, L. (1972). General System Theory (Revised ed.). New York: George

Braziller.


von Bertalanffy, L. (1972). The History and Status of General Systems Theory. Academy

of Management Journal, 15(4), 407-426. doi:http://dx.doi.org/10.2307/255139

Warren, S. D., & Brandeis, L. D. (1890). The Right to Privacy. Harvard Law Review,

4(5), 193-220. Retrieved Sep 29, 2014, from http://www.jstor.org/stable/1321160

Weckowicz, T. E. (2000). Ludwig von Bertalanffy A Pioneer of General Systems

Theory. Alberta: University of Alberta. Retrieved July 24, 2017, from

http://www.richardjung.cz/bert1.pdf

Weinberg, G. M. (1975). An Introduction to General Systems Thinking. New York: John

Wiley & Sons.

White, B. E. (2010). Complex Adaptive Systems Engineering (CASE). IEEE Aerospace

and Electronic Systems Magazine, 25(12), 16-22.

doi:10.1109/MAES.2010.5638784

White, B. E. (2010). Systems Engineering Activities (SEA) Profiler. 8th Conference on

Systems Engineering Research (CSER). Hoboken, NJ. Retrieved from http://cau-

ses.net/papers/

White, B. E. (2016). On a Maturity Model for Complexity, Complex Systems, and

Complex Systems Engineering. International Conference On Software

Engineering, Mobile Computing And Media Informatics, (pp. 1-24). New Forest,

U.K. Retrieved from Personal Correspondence

White, B. E., Gandhi, S. J., Gorod, A., Ireland, V., & Sauser, B. (2013). On the

importance and value of case studies. IEEE International Systems Conference

(Syscon) (pp. 114-122). IEEE. doi:10.1109/SysCon.2013.6549868


Wickramasinghe, N., Chalasani, S., Boppana, R. V., & Madni, A. M. (2007). Healthcare

System of systems. 2007 IEEE International Symposium on Systems of Systems

Engineering. San Antonio: IEEE. doi:10.1109/SYSOSE.2007.4304283

Wiener, N. (1961). Cybernetics. Cambridge: The M.I.T. Press.

Wilkes, J. J. (2014). The Creation of HIPAA Culture: Prioritizing Privacy Paranoia over

Patient Care. Brigham Young University Law Review, 1213-1249. Retrieved Oct

9, 2015, from http://search.proquest.com/docview/1716212527

World Bank. (2016). Health expenditure per capita (current US$). Retrieved Oct 11,

2016, from data.worldbank.org:

http://data.worldbank.org/indicator/SH.XPD.PCAP

Xu, J., Murphy, S. L., Kochanek, K. D., & Arias, E. (2016, Dec). NCHS Data Brief No.

267. Mortality in the United States, 2015. Washington, DC: CDC/NCHS.

Retrieved Sep 10, 2017, from

https://www.cdc.gov/nchs/products/databriefs/db267.htm

Yax, L. K. (2015). Annual Estimates of the Population for the United States, Regions,

States, and Puerto Rico: April 1, 2010 to July 1, 2013. Washington, DC: US

Census Bureau. Retrieved Jan 25, 2015, from

http://www.census.gov/compendia/statab/cats/population.html

Yetter, D. (2016, Jan 11). Bevin notifies feds he'll dismantle kynect. courier-journal.

Kentucky: USA Today Network. Retrieved Jan 12, 2016, from

http://www.courier-journal.com/story/news/politics/2016/01/11/bevin-notifies-

feds-hell-dismantle-kynect/78623024/


Appendix A COBPS

The Office for Civil Rights (OCR) within the Department of Health & Human

Services (HHS) administers and enforces the HIPAA Privacy, Security, and Breach

Notification Rules which include requirements for timeliness, method, and content of notification of all breaches of PHI affecting more than 500 individuals. (HHS/OCR,

2015) The term ‘‘breach’’ is defined in the HITECH act as “the unauthorized acquisition, access, use, or disclosure of protected health information which compromises the security or privacy of such information, except where an unauthorized person to whom such information is disclosed would not reasonably have been able to retain such information.”

(HHS, 2009) This section presents an overview of the requirements for breach notification, describes the activities completed as part of a longitudinal study and provides a summary of reported PHI breaches from 2011 through 2014.

A-1 Review of Breach Notification Requirements

In Public Law 111-5, dated 17 February 2009, Section 13402, Notification in case of

Breach, Paragraph (d), Timeliness of Notification, Paragraph (1) General, the HITECH

Act requires “all notifications required under this section shall be made without unreasonable delay and in no case later than 60 calendar days after the discovery of a breach by the covered entity involved (or business associate involved in the case of a notification) for breaches of unsecured PHI.” (111th U.S. Congress, 2009) Since the law stipulates ‘all notifications’ this requirement appears to apply to both notifications to the media and notifications to the Secretary.


A-1.1 Requirements: Notification to the Media

Per 45 CFR § 164.406, “for a breach of unsecured protected health information involving more than 500 residents of a State or jurisdiction, a covered entity shall, following the discovery of the breach as provided in 45 CFR § 164.404(a)(2), notify prominent media outlets serving the State or jurisdiction.” (HHS/OCR, 2009) Regarding notification timelines, 45 CFR § 164.406 states that “A covered entity shall provide the notification to the media without unreasonable delay and in no case later than 60 calendar days after discovery of a breach.” (HHS/OCR, 2013)

A-1.2 Requirements: Notification to the Secretary

Beginning September 23, 2009, HHS’ Interim final rule, 74 FR 42767 regarding breaches of unsecured protected health information involving 500 or more individuals, required “covered entities to notify the Secretary immediately.” (HHS/OCR, 2009) The

Interim final rule gives guidance that states, “we {HHS} interpret the term ‘immediately’ in the statute to require notification be sent … without unreasonable delay but in no case later than 60 days from discovery of the breach.” (HHS/OCR, 2009)

A-2 Longitudinal Study of HHS/OCR Breach Portal

USNHC Operations may result in a PHI Breach that is forwarded via the PHI Breach Notification process to the HHS/OCR Breach Portal and recorded in a public notice, as shown in Figure A-1. This longitudinal study of reported breaches from 2009 through 2014, recorded in the HHS/OCR Breach Portal (HHS/OCR, 2009) and in public notices, began in 2014 with repeated visits to the HHS/OCR Breach Portal. With each visit, the reported PHI breaches were archived, which allowed for longitudinal analysis of the data set and of changes over time.


A-2.1 Longitudinal Study Activities

When analysis of the most recent archive identified a corrupted or incomplete breach notice, breach attributes were verified using the previous archive copies. Where there were irreconcilable differences among the multiple data sets, or where entries were incomplete, individual breach notifications were verified using associated public notices. After correlating breaches across previous extracts, each breach was analyzed to determine whether the underlying cause of the breach was malicious or negligent, and whether the breach resulted from electronic or physical media, as shown in Figure A-1.

Figure A-1: Activity Summary of the Longitudinal Study to Develop the Consolidated HHS/OCR Breach Portal Summary Data Set (COBPS)
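
As a hedged illustration of the archive-reconciliation step summarized in Figure A-1, the sketch below walks the archived portal extracts from newest to oldest and fills in attributes that are blank or corrupted in the most recent copy. The data structures and field handling are assumptions for illustration, not the study's actual tooling.

```python
from typing import Dict, List, Optional

# Hypothetical shape: each archive maps a breach identifier to its attributes,
# and the list of archives is ordered newest first.
Archive = Dict[str, Dict[str, Optional[str]]]

def reconcile(archives: List[Archive]) -> Archive:
    """Fill blank or missing attributes in the newest archive from older extracts."""
    newest = {bid: dict(attrs) for bid, attrs in archives[0].items()}
    for older in archives[1:]:
        for bid, old_attrs in older.items():
            current = newest.setdefault(bid, {})   # breach later dropped from the portal
            for key, value in old_attrs.items():
                if value and not current.get(key): # blank, None, or absent in newer copy
                    current[key] = value           # recover the attribute from the older extract
    return newest
```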

Table A-1 describes the COBPS set of variables developed from the range of variables available in the HHS/OCR Breach Portal from 2009 through December 2014, together with other study attributes designed to allow identification, tracking, and correlation of breach notices over time.


Table A-1: Longitudinal Variables for HHS/OCR Breach Portal Study

Attribute | Description | Range of Values | Use
Archive | List | Unique date of visit to portal | Identify date or extract
Status | List | Unchanged, Lost, Corrupted | Track determined status
Action | List | Posted or Updated | Track determined action
Name of Covered Entity | Name | Free text (describing either Covered Entity or Business Associate) | OCR Current but Modified
Business Associate Involved | Name | Free text | OCR Discontinued
State | State of covered entity | State Names or Abbreviations | OCR Current
Covered Entity Type | List | Healthcare Provider, Health Plan, Healthcare Clearing House, or Business Associate | OCR Current
Individuals Affected | Number | 500 or greater | OCR Current
Date of Breach | Date | Date or Range of dates | OCR Discontinued
Breach Submission Date | Date | Date or Range of dates | OCR Current
Date Posted or Updated | Date | Date | OCR Discontinued
Type of Breach | List | Theft, Loss, Improper Disposal, Unauthorized Access/Disclosure, Hacking/IT Incident, Unknown, Other | OCR Current
Location of Breached Information | List | Laptop, Desktop Computer, Network Server, E-mail, Other Portable Electronic Device, Other, Electronic Medical Record, Paper/Films | OCR Current
Business Associate Present | Boolean | Yes or No | OCR Discontinued
Web Description | Text | Free text | OCR Current
Nature of Breach | List | Malicious, Negligent, or Unknown | This study
Kind of Breach | List | Electronic, Physical, or Unknown | This study
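
Read as a record schema, Table A-1 can be summarized in code. The dataclass below is a hypothetical rendering of a single COBPS entry, combining the OCR-supplied attributes with the study-derived Nature of Breach and Kind of Breach fields; the names and types are illustrative only.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class CobpsRecord:
    """Illustrative schema for one consolidated breach notice (see Table A-1)."""
    archive_date: date                  # unique date of visit to the portal
    status: str                         # Unchanged, Lost, or Corrupted
    action: str                         # Posted or Updated
    covered_entity_name: str            # free text (covered entity or business associate)
    state: str                          # state name or abbreviation
    covered_entity_type: str            # Healthcare Provider, Health Plan, ...
    individuals_affected: int           # 500 or greater
    breach_submission_date: date
    type_of_breach: str                 # Theft, Loss, Hacking/IT Incident, ...
    location_of_breached_info: str      # Laptop, Network Server, Paper/Films, ...
    web_description: Optional[str]      # free text
    nature_of_breach: str               # Malicious, Negligent, or Unknown (this study)
    kind_of_breach: str                 # Electronic, Physical, or Unknown (this study)
```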


A-2.2 Kind of Breach Analysis (‘Electronic’ vs ‘Physical’)

Analysis of each breach portal entry's location of breached information attribute allowed for determining whether the kind of breach was electronic or physical. Specific HHS/OCR Breach Portal entries that we determined to be electronic breaches contained the following descriptions: ‘Desktop Computer’, ‘Electronic Medical Record’, ‘Email’, ‘Laptop’, ‘Network Server’, and ‘Portable Electronic Device’. Specific HHS/OCR Breach Portal entries that we determined to be physical breaches were described as ‘Paper’ or ‘Paper/Films’. Research and analysis of associated public notices determined the kind of breach if a breach entry was blank, incomplete, ‘Other’, or ‘Unknown’.
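
A minimal sketch of this mapping, assuming the location strings listed above (a fuller implementation would fall back to the public-notice research for blank, incomplete, ‘Other’, or ‘Unknown’ entries):

```python
# Location-of-breached-information values treated as electronic or physical breaches.
ELECTRONIC_LOCATIONS = {
    "Desktop Computer", "Electronic Medical Record", "Email",
    "Laptop", "Network Server", "Portable Electronic Device",
}
PHYSICAL_LOCATIONS = {"Paper", "Paper/Films"}

def classify_kind_of_breach(location: str) -> str:
    """Map a portal location value to the study's kind-of-breach category."""
    if location in ELECTRONIC_LOCATIONS:
        return "Electronic"
    if location in PHYSICAL_LOCATIONS:
        return "Physical"
    return "Unknown"  # blank, incomplete, 'Other': resolved via associated public notices
```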

A-2.3 Nature of Breach Analysis (‘Malicious’ vs ‘Negligent’)

Analysis of each breach portal entry's type of breach attribute allowed for determining whether the nature of the breach was malicious or negligent. Specific HHS/OCR Breach Portal entries that we determined to be malicious breaches included ‘Hacking/IT Incident’, ‘Unauthorized Access/Disclosure’, and ‘Theft’. Specific HHS/OCR Breach Portal entries that we determined to be negligent breaches included ‘Improper Disposal’ and ‘Loss’. Research and analysis of associated public notices determined the nature of the breach if the breach description was blank, incomplete, ‘Other’, or ‘Unknown’.
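
The companion mapping for the nature of breach, sketched under the same assumptions:

```python
# Type-of-breach values treated as malicious or negligent in this study.
MALICIOUS_TYPES = {"Hacking/IT Incident", "Unauthorized Access/Disclosure", "Theft"}
NEGLIGENT_TYPES = {"Improper Disposal", "Loss"}

def classify_nature_of_breach(breach_type: str) -> str:
    """Map a portal type-of-breach value to the study's nature-of-breach category."""
    if breach_type in MALICIOUS_TYPES:
        return "Malicious"
    if breach_type in NEGLIGENT_TYPES:
        return "Negligent"
    return "Unknown"  # blank, incomplete, 'Other': resolved via associated public notices
```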

A-2.4 Identification of State for Responsible Breach Entity

On 25 Jan 2013, the Secretary of Health and Human Services (HHS) released a final rule (HHS/OCR, 2013) making business associates of covered entities directly liable for compliance with the HIPAA Privacy and Security Rules' requirements, effective 23 September 2013. This change required detailed analysis of each breach to determine whether the breach occurred before or after 23 September 2013 and, if a business associate was involved, which state and state laws were in effect.
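
The date test itself is simple; what follows is a hedged sketch of it, using a hypothetical breach-record flag rather than the study's actual attributes.

```python
from datetime import date

# Compliance date after which business associates became directly liable
# under the 2013 final rule.
OMNIBUS_COMPLIANCE_DATE = date(2013, 9, 23)

def business_associate_directly_liable(breach_date: date, ba_involved: bool) -> bool:
    """True when a breach involving a business associate falls under the 2013 rule."""
    return ba_involved and breach_date >= OMNIBUS_COMPLIANCE_DATE
```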

A-3 Summary

State Health Environment Information Integrity (SHEII) was determined by dividing the total breaches for all covered entities, or for HCPs only, by the state population. Table A-2 contains the longitudinal study summary of reported PHI breaches and SHEII by state.
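
The SHEII columns in Table A-2 are on a per-million-residents scale; for example, Alabama's 15 breaches across all covered entities divided by its population of 4,849,377 and scaled by 10^6 gives the tabulated SHEII (AB/M) of 3.0932. A minimal sketch of that calculation, with hypothetical names:

```python
def sheii_per_million(breach_count: int, state_population: int) -> float:
    """Breaches per million residents, matching the /M columns in Table A-2."""
    return breach_count / state_population * 1_000_000

# Example: Alabama, all covered entities (AB): 15 breaches, population 4,849,377.
print(round(sheii_per_million(15, 4_849_377), 4))  # 3.0932
```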

The HHS/OCR Breach Portal data set underwent significant structural changes when HHS/OCR installed a new OCR HIPAA web portal on 5 Feb 2015. (HIPAA Journal, 2015) Major changes included a new web address, removal of the business associate attribute as a separate column, and removal of the date of breach attribute. With that major modification to the HHS/OCR Breach Portal in early 2015, it was no longer possible to validate the timeliness of notification or to identify the state of the HCP or BA responsible for breaches reported after December 2014.


Table A-2: Consolidated HHS/OCR Breach Portal Set (COBPS): Federal Grants for State Exchanges, Evaluation of Substantial Federal Grant for State Exchange (SFGSE), PHI Breaches (for All Covered Entities and HCPs Only), Populations, SHEII (AB/M), and SHEII (HCPB/M) for PHI Breaches from 2009 through 2014

STATE/DISTRICT | FEDERAL GRANTS TO STATES | SFGSE | ALL BREACHES | HCP BREACHES | STATE POPULATION | SHEII (AB/M) | SHEII (HCPB/M)
ALABAMA | $9,772,451 | NO | 15 | 10 | 4,849,377 | 3.0932 | 2.0621
ALASKA | $0 | NO | 5 | 3 | 736,732 | 6.7867 | 4.0720
ARIZONA | $30,877,097 | NO | 25 | 20 | 6,731,484 | 3.7139 | 2.9711
ARKANSAS | $58,149,831 | NO | 8 | 5 | 2,966,369 | 2.6969 | 1.6856
CALIFORNIA | $1,065,683,056 | YES | 125 | 87 | 38,802,500 | 3.2214 | 2.2421
COLORADO | $178,931,023 | YES | 21 | 15 | 5,355,866 | 3.9209 | 2.8007
CONNECTICUT | $164,466,462 | YES | 15 | 10 | 3,596,677 | 4.1705 | 2.7803
DELAWARE | $21,258,247 | NO | 2 | 1 | 935,614 | 2.1376 | 1.0688
FLORIDA | $0 | NO | 65 | 55 | 19,893,297 | 3.2674 | 2.7648
GEORGIA | $1,000,000 | NO | 38 | 27 | 10,097,343 | 3.7634 | 2.6740
HAWAII | $205,342,270 | YES | 1 | 1 | 1,419,561 | 0.7044 | 0.7044
IDAHO | $69,395,587 | NO | 2 | 1 | 1,634,464 | 1.2236 | 0.6118
ILLINOIS | $154,813,136 | YES | 57 | 36 | 12,880,580 | 4.4253 | 2.7949
INDIANA | $7,895,126 | NO | 36 | 24 | 6,596,855 | 5.4571 | 3.6381
IOWA | $59,683,889 | NO | 7 | 4 | 3,107,126 | 2.2529 | 1.2874
KANSAS | $1,000,000 | NO | 7 | 6 | 2,904,021 | 2.4105 | 2.0661
KENTUCKY | $253,698,351 | YES | 28 | 22 | 4,413,457 | 6.3442 | 4.9848
LOUISIANA | $0 | NO | 9 | 8 | 4,649,676 | 1.9356 | 1.7205
MAINE | $6,877,676 | NO | 1 | 1 | 1,330,089 | 0.7518 | 0.7518
MARYLAND | $171,013,111 | YES | 17 | 7 | 5,976,407 | 2.8445 | 1.1713
MASSACHUSETTS | $184,058,835 | YES | 34 | 22 | 6,745,408 | 5.0405 | 3.2615
MICHIGAN | $41,517,021 | NO | 25 | 15 | 9,909,877 | 2.5227 | 1.5136
MINNESOTA | $155,020,465 | YES | 23 | 9 | 5,457,173 | 4.2146 | 1.6492
MISSISSIPPI | $38,039,341 | NO | 5 | 5 | 2,994,079 | 1.6700 | 1.6700
MISSOURI | $21,865,716 | NO | 25 | 13 | 6,063,589 | 4.1230 | 2.1439
MONTANA | $1,000,000 | NO | 5 | 4 | 1,023,579 | 4.8848 | 3.9079
NEBRASKA | $6,481,838 | NO | 7 | 4 | 1,881,503 | 3.7204 | 2.1260
NEVADA | $90,773,768 | YES | 6 | 5 | 2,839,099 | 2.1133 | 1.7611
NEW HAMPSHIRE | $11,868,078 | NO | 4 | 4 | 1,326,813 | 3.0147 | 3.0147
NEW JERSEY | $8,897,316 | NO | 16 | 7 | 8,938,175 | 1.7901 | 0.7832
NEW MEXICO | $123,281,600 | YES | 11 | 9 | 2,085,572 | 5.2743 | 4.3154
NEW YORK | $511,253,660 | YES | 70 | 46 | 19,746,227 | 3.5450 | 2.3296
NORTH CAROLINA | $87,357,315 | NO | 35 | 26 | 9,943,964 | 3.5197 | 2.6147
NORTH DAKOTA | $1,000,000 | NO | 3 | 2 | 739,482 | 4.0569 | 2.7046
OHIO | $1,000,000 | NO | 32 | 25 | 11,594,163 | 2.7600 | 2.1563
OKLAHOMA | $1,000,000 | NO | 8 | 6 | 3,878,051 | 2.0629 | 1.5472
OREGON | $304,963,587 | YES | 16 | 13 | 3,970,239 | 4.0300 | 3.2744
PENNSYLVANIA | $34,832,212 | NO | 42 | 30 | 12,787,209 | 3.2845 | 2.3461
RHODE ISLAND | $140,410,091 | YES | 7 | 5 | 1,055,173 | 6.6340 | 4.7386
SOUTH CAROLINA | $1,000,000 | NO | 14 | 6 | 4,832,482 | 2.8971 | 1.2416
SOUTH DAKOTA | $6,879,569 | NO | 2 | 1 | 853,175 | 2.3442 | 1.1721
TENNESSEE | $9,110,165 | NO | 35 | 18 | 6,549,352 | 5.3440 | 2.7484
TEXAS | $1,000,000 | NO | 94 | 72 | 26,956,958 | 3.4870 | 2.6709
UTAH | $6,407,987 | NO | 12 | 6 | 2,942,902 | 4.0776 | 2.0388
VERMONT | $172,641,081 | YES | 1 | 1 | 626,562 | 1.5960 | 1.5960
VIRGINIA | $15,862,889 | NO | 21 | 14 | 8,326,289 | 2.5221 | 1.6814
WASHINGTON | $266,026,060 | YES | 27 | 20 | 7,061,530 | 3.8235 | 2.8322
WEST VIRGINIA | $20,832,828 | NO | 5 | 4 | 1,850,326 | 2.7022 | 2.1618
WISCONSIN | $999,873 | NO | 15 | 7 | 5,757,564 | 2.6053 | 1.2158
WYOMING | $800,000 | NO | 4 | 2 | 584,153 | 6.8475 | 3.4238
DISTRICT OF COLUMBIA (DC) | $133,573,928 | YES | 9 | 2 | 601,723 | 14.9570 | 3.3238
TOTAL | $4,726,038,608 | NA | 1,097 | 746 | 318,198,163 | NA | NA
NOTES: SFGSE: 17 YES, 34 NO, as of Dec 2014.

End of Dissertation.
