Applying the Cynefin Sense-Awareness Framework to Develop a Systems Engineering Method Diagnostic Assessment Model (SEMDAM)
by Russell L. Gilbertson
Bachelor of Electrical Engineering, June 1983, University of Minnesota
Master of Business Administration, December 1985, Rensselaer Polytechnic Institute
A Dissertation submitted to
The Faculty of The School of Engineering and Applied Science of the George Washington University in partial fulfillment of the requirements for the degree of Doctor of Philosophy
January 19, 2018
Dissertation directed by
Bereket Tanju, Professorial Lecturer in Engineering Management and Systems Engineering
and
Timothy Eveleigh, Professorial Lecturer in Engineering Management and Systems Engineering
The School of Engineering and Applied Science of The George Washington University certifies that Russell L. Gilbertson has passed the Final Examination for the degree of
Doctor of Philosophy as of October 24, 2017. This is the final and approved form of the dissertation.
Applying the Cynefin Sense-Awareness Framework to Develop a Systems Engineering Method Diagnostic Assessment Model (SEMDAM)
Dissertation Research Committee
Bereket Tanju, Professorial Lecturer in Engineering Management and Systems Engineering, Dissertation Co-Director
Timothy Eveleigh, Professorial Lecturer in Engineering Management and Systems Engineering, Dissertation Co-Director
Shahram Sarkani, Professor of Engineering Management and Systems Engineering, Committee Member
Thomas Mazzuchi, Professor of Engineering Management and Systems Engineering & of Decision Sciences, Committee Member
Amir Etemadi, Assistant Professor of Engineering and Applied Science, Committee Member
Dedication
I wish to dedicate this work to my wife, Ms. Debra Daniels, who supported me throughout this endeavor, and to my mother, Ms. Sandra J. Gilbertson, who passed away before I completed this work but inspired me to always keep learning.
In addition to proving to myself that I could do this, I wanted to demonstrate to my children and their spouses, Jason & Irina and Daniel & Tabitha; my step-children and their spouses, Eric, Melynda & Shawn; and my grandchildren, Elizabeth, Declan, and Finley, that it is never too late to learn.
Acknowledgements
I would like to acknowledge the assistance, advice and guidance provided by Drs.
Bereket Tanju, Timothy Eveleigh, Thomas A. Mazzuchi, Shahram Sarkani, Steven
Stuban, and Jason Dever from the Department of Engineering Management & Systems
Engineering (EMSE), School of Engineering and Applied Science (SEAS) of The George
Washington University.
I would like to thank Dr. Jimmie McEver, Johns Hopkins University Applied Physics
Laboratory, and Dr. John MacCarthy, Director, Systems Engineering Education Program,
Institute for Systems Research, University of Maryland-College Park, for taking time to meet and discuss my research. Much of the research methodology and approach was developed while teaching at the A. James Clark School of Engineering, University of
Maryland for Dr. MacCarthy – an experience for which I am truly grateful.
I would also like to thank Dr. Sarah Sheard, Software Engineering Institute, Carnegie
Mellon University, and Dr. Brian E White, MITRE (Retired), for sharing their work electronically via ResearchGate and providing comments along the way.
This dissertation would have been much different without my George Washington
University systems engineering cohort classmates Dr. Alan Ravitz and Dr. Blake Roberts.
Alan piqued my interest in medical systems and introduced several technologies that helped significantly. Blake provided a sounding board and support throughout the entire process. Thank you, gentlemen.
Finally, I would not have thought that obtaining a PhD degree while working was possible if not for Dr. Jason Siebel, who shared his positive experiences with the George
Washington University’s SEAS/EMSE program during a time we worked together.
Abstract
Applying the Cynefin Sense-Awareness Framework to Develop a Systems Engineering Method Diagnostic Assessment Model (SEMDAM)
Different classes of problems warrant different classes of solutions. There is no agreed set of unified principles and models to support systems engineering use over a wide range of domains, nor is there a set of consistent terminology and definitions; these two deficiencies impede the adoption of systems engineering and create problems. On-schedule delivery of a system that meets stakeholder needs at an acceptable cost depends on the selection and application of a system engineering method (SEM) appropriate for the class of system problem (COSP). Real-world problems possess a degree of complexity that requires a commensurately complex approach: stakeholders are demanding increasingly capable systems that are growing in complexity, yet complexity-related system misunderstanding is at the root of significant cost overruns and system failures. INCOSE and IEEE recommend system complexity as a basis for the selection and tailoring of SE processes; however, neither society provides a definition of complexity or a methodology for SEM selection. Selection of a complexity-appropriate SEM depends on understanding the COSP, which is currently difficult to define, observe, or measure. This research develops a diagnostic assessment model (DAM), based on the Cynefin framework, that infers the COSP and then recommends a complexity-appropriate SEM to reduce system miscategorization and thereby reduce the risk of system failure.
An empirical healthcare case study is used to demonstrate SEMDAM’s application and efficacy.
Table of Contents
Dedication ...... iii
Acknowledgements ...... iv
Abstract ...... v
Table of Contents ...... vi
List of Figures ...... xii
List of Tables ...... xiv
List of Acronyms ...... xv
1 Introduction ...... 1
1.1 GENERAL DESCRIPTION OF THE PROBLEM ...... 4
1.2 MAJOR RESEARCH QUESTIONS ...... 7
1.3 SIGNIFICANCE & JUSTIFICATION ...... 8
1.4 SCOPE AND LIMITATIONS ...... 9
1.5 OVERVIEW OF DISSERTATION ...... 10
2 Literature Review ...... 12
2.1 SE THEORETICAL FOUNDATIONS ...... 15
2.1.1 Theories from Philosophy ...... 16
2.1.2 Theories from Classical Sciences ...... 18
2.1.3 Theories from Systems Science ...... 20
2.1.4 Summary of SE Theoretical Foundations ...... 31
2.2 DEFINITION OF SYSTEM USED ...... 32
2.2.1 System Life Cycle Model ...... 33
2.2.2 System Function ...... 34
2.2.3 System Structure...... 35
2.2.4 System Behavior ...... 36
2.3 ENGINEERING ORDERED SYSTEMS (EOS) ...... 37
2.3.1 EOS Standards of Practice ...... 40
2.3.2 Classical Sciences Assumptions Underpinning EOS ...... 42
2.3.3 Codifying TSM ...... 48
2.3.4 Codifying SoSM ...... 50
2.4 ENGINEERING UN-ORDERED SYSTEMS (EUOS) ...... 54
2.4.1 EUOS Standards of Practice ...... 55
2.4.2 Codifying ESM ...... 56
2.4.3 Codifying CSM ...... 59
2.5 CYNEFIN SENSE-MAKING FRAMEWORK ...... 60
2.5.1 Introduction to Sense-Making ...... 60
2.5.2 History of the Cynefin Framework ...... 61
2.5.3 Cynefin Complexity Domains ...... 65
2.5.4 Cynefin Summary ...... 72
2.6 DEFINING SYSTEM & MODEL COMPLEXITY ...... 73
2.6.1 Definition for System Complexity ...... 74
2.6.2 Complexity of Technology Elements ...... 75
2.6.3 Complexity of Process Elements ...... 76
2.6.4 Complexity of Human Elements ...... 77
2.6.5 Complexity of Environments ...... 78
2.6.6 Combining People, Process, Technology and Environment Complexity ...... 79
2.7 MODELLING COMPLEXITY ...... 82
2.7.1 Definition of Model Complexity ...... 83
2.7.2 Identification of SEMs...... 84
2.7.3 Alignment of COSP and SEMs ...... 86
2.8 LITERATURE REVIEW SUMMARY ...... 89
3 SEMDAM Methodology ...... 90
3.1 RESEARCH DESIGN ...... 91
3.1.1 Research Context ...... 92
3.1.2 Methodological Formulation ...... 94
3.1.3 Alternatives Considered ...... 98
3.1.4 Plans for Validation ...... 108
3.2 SEMDAM INTRODUCTION & SYSTEMATIC DESCRIPTION ...... 112
3.2.1 Step 1 – Gather Evidence of SE&M Activity ...... 115
3.2.2 Step 2 – Evaluate Hypothesis ABD1 & Interpret if COSP is Disorder ...... 116
3.2.3 Step 3 – IV&V Program and SE Management...... 117
3.2.4 Step 4 – IV&V Business or Mission Analysis...... 118
3.2.5 Step 5 – Obtain Evidence for Cause and Effect Analysis ...... 119
3.2.6 Step 6 – Evaluate Hypothesis ABD2 & Interpret if COSP is Known ...... 121
3.2.7 Step 7 – Evaluate Hypothesis ABD3 & Interpret if COSP is Knowable ...... 122
3.2.8 Step 8 – Evaluate Hypothesis ABD4 & Interpret if COSP is Complex ...... 123
3.2.9 Step 9 – Evaluate Hypothesis ABD5 & Interpret if COSP is Chaos ...... 124
3.2.10 Step 10 – Recommend SEM based on Inferred COSP ...... 125
3.3 ATTRIBUTE SELECTION METHOD ...... 126
3.3.1 Valuable, Nontrivial, and Measurable Data...... 127
3.3.2 Attribute Data Types ...... 127
3.3.3 Identification of Key Decision Attributes by Design of Experiment ...... 128
3.3.4 Identification of Key Decision Attributes by Observational Study ...... 130
3.3.5 Describing Trends in Attribute Data ...... 132
3.4 STATISTICAL MODEL SELECTION ...... 132
3.4.1 Linear or Logistic Regression ...... 133
3.4.2 Analysis of Variance (ANOVA) ...... 134
3.4.3 Two-Sample t-Test ...... 135
3.4.4 Data Qualification Methods ...... 135
3.5 CHAPTER SUMMARY ...... 138
4 SEMDAM Applied to an Empirical Case Study ...... 139
4.1 USNHC – EMPIRICAL CASE STUDY OVERVIEW ...... 140
4.1.1 Evidence of SE&M or other Assessments ...... 147
4.1.2 Stakeholder Needs ...... 148
4.1.3 Statement of the Problem or Opportunity ...... 149
4.1.4 Sources of Data (Attributes) ...... 150
4.2 APPLYING SEMDAM TO USNHC CASE STUDY ...... 156
4.2.1 Task 1 (Step 1) – Gather Evidence of SE&M Activity ...... 156
4.2.2 Task 2 (Step 2) – Evaluate Hypothesis ABD1 – COSP is Disorder ...... 156
4.2.3 Task 3 (Step 3) – IV&V Program & Systems Engineering Management ....157
4.2.4 Task 4 (Step 4) – IV&V Business or Mission Analysis ...... 157
4.2.5 Task 5 (Step 5) – Obtain Evidence for Cause and Effect Analysis ...... 158
4.2.6 Task 6 (Step 6) – Evaluate Hypothesis ABD2 – COSP is Known ...... 160
4.2.7 Task 7 (Step 5) – Obtain Evidence for Cause and Effect Analysis ...... 161
4.2.8 Task 8 (Step 7) – Evaluate Hypothesis ABD3 – COSP is Knowable ...... 165
4.2.9 Task 9 (Step 5) – Obtain Evidence for Cause and Effect Analysis ...... 166
4.2.10 Task 10 (Step 8) – Evaluate Hypothesis ABD4 – COSP is Complex ...... 172
4.2.11 Task 11 (Step 10) – Recommend SEM based on Inferred COSP ...... 172
4.3 SUMMATION OF USNHC CASE STUDY ...... 173
4.3.1 ‘Striking a Balance’ between ‘Uses and Disclosures’ ...... 173
4.3.2 Improving the Operations of the Health Care Systems ...... 174
4.3.3 Reducing Administrative Costs ...... 175
4.3.4 USNHC Empirical Case Study Summary & Conclusions ...... 176
5 Synthesis & Discussion ...... 181
5.1 CHAPTER INTRODUCTION ...... 181
5.2 OPPORTUNITIES AND CHALLENGES OF SEMDAM ...... 181
5.2.1 Generalized Typology of SEMs ...... 182
5.2.2 Development of a COSP Sensing Framework ...... 183
5.2.3 Development of SEMDAM ...... 184
5.2.4 Demonstrate SEMDAM via Empirical Case Study ...... 186
5.3 CHAPTER SUMMARY ...... 187
6 Conclusions ...... 188
6.1 SUMMARY & FINDINGS ...... 188
6.1.1 Summary of SE Theoretical Foundations ...... 189
6.1.2 Ordered vs. Un-Ordered Systems ...... 190
6.1.3 Models & Patterns – Identifying the “Right Level” ...... 191
6.1.4 Identifying COSP using Classification versus Characteristics ...... 192
6.1.5 Objective and Subjective System Complexity ...... 193
6.2 AREAS FOR FUTURE RESEARCH ...... 193
6.2.1 Incorporating Uncertainty ...... 194
6.2.2 Adoption and Periodic Use of SEMDAM in a Program ...... 195
6.2.3 Further Research into Extrinsic and Intrinsic regulators ...... 195
References ...... 199
Appendix A COBPS ...... 224
A-1 REVIEW OF BREACH NOTIFICATION REQUIREMENTS ...... 224
A-1.1 Requirements: Notification to the Media ...... 225
A-1.2 Requirements: Notification to the Secretary ...... 225
A-2 LONGITUDINAL STUDY OF HHS/OCR BREACH PORTAL...... 225
A-2.1 Longitudinal Study Activities ...... 226
A-2.2 Kind of Breach Analysis (‘Electronic’ vs ‘Physical’) ...... 228
A-2.3 Nature of Breach Analysis (‘Malicious’ vs ‘Negligent’) ...... 228
A-2.4 Identification of State for Responsible Breach Entity ...... 228
A-3 SUMMARY ...... 229
List of Figures
FIGURE 1-1: RESEARCH AND DISSERTATION ORGANIZATION FOR DEVELOPING SYSTEM
ENGINEERING METHOD DIAGNOSTIC ASSESSMENT MODEL (SEMDAM) ...... 10
FIGURE 2-1: A SYSTEM ENGINEERING “METHOD” IS UTILIZED TO DELIVER A “SYSTEM” .. 14
FIGURE 2-2: GRAPHICAL VIEW ASHBY’S THEORY OF REQUISITE VARIETY ...... 24
FIGURE 2-3: SYSTEM WITH ASHBY’S THEORY OF REQUISITE VARIETY ...... 36
FIGURE 2-4: MAKING A PREDICTION OR PERCEIVING A TREND ...... 37
FIGURE 2-5: DEVELOPMENT TIMELINE FOR EOS STANDARDS OF PRACTICE ...... 41
FIGURE 2-6: SNOWDEN'S LANDSCAPE OF MANAGEMENT PROVIDES INSIGHT INTO HIS
INITIAL RESEARCH INTERTWINING ONTOLOGY AND EPISTEMOLOGY ...... 61
FIGURE 2-7: 2003 VERSION OF THE CYNEFIN FRAMEWORK BY KURTZ AND SNOWDEN ..... 63
FIGURE 2-8: 2007 VERSION OF THE CYNEFIN FRAMEWORK BY SNOWDEN AND BOONE ..... 64
FIGURE 2-9: DOMAINS OF UN-ORDERED, ORDERED AND DISORDER ...... 66
FIGURE 2-10: ORDERED DOMAINS INCLUDE THE SIMPLE DOMAIN AND THE
COMPLICATED DOMAIN ...... 67
FIGURE 2-11: UN-ORDERED DOMAINS INCLUDE THE COMPLEX DOMAIN AND THE
CHAOTIC DOMAIN ...... 67
FIGURE 2-12: REPRESENTATION OF SYSTEM OF INTEREST (SOI) ...... 75
FIGURE 2-13: GRAPHICAL SUMMARY OF COMPLEXITY MEASUREMENTS FOR SYSTEM
STATES, TECHNOLOGY, PROCESS, PEOPLE/WORKFORCE, AND ENVIRONMENT ...... 80
FIGURE 2-14: PROPOSED TYPOLOGY OF SEMS IN RELATION TO COMPLEXITY ...... 86
FIGURE 3-1: THE INCOSE COMPLEX SYSTEMS WORKING GROUP USE OF THE CYNEFIN
FRAMEWORK TO IDENTIFY CLASSES OF SYSTEMS PROBLEMS ...... 93
FIGURE 3-2: MITRE'S ENTERPRISE SYSTEMS ENGINEERING PROFILER IS ORGANIZED
INTO FOUR QUADRANTS AND THREE RINGS...... 100
FIGURE 3-3: EXAMPLE SITUATIONAL ASSESSMENT FROM SEA PROFILER ...... 102
FIGURE 3-4: SHEARD'S TYPES OF COMPLEXITY FRAMEWORK APPLIED TO ENTITIES ...... 103
FIGURE 3-5: DEDUCTIVE REASONING BEGINS WITH THEORY AND IS THEREFORE
DEPENDENT ON EXISTENCE OF ACCEPTABLE THEORY ...... 105
FIGURE 3-6: INDUCTIVE REASONING BEGINS WITH OBSERVATION AND IS THEREFORE
DEPENDENT ON EXISTENCE OF MEASUREMENT ...... 106
FIGURE 3-7: WHITE ON HOW PRACTICE DRIVES THEORY IN THE YEARS SINCE 1950 ...... 110
FIGURE 3-8: RELATIVE DIFFICULTY OF ENGINEERING VARIOUS TYPES OF SYSTEMS ...... 110
FIGURE 3-9: SYSTEM ENGINEERING METHOD DIAGNOSTIC ASSESSMENT MODEL (SEMDAM) ...... 114
FIGURE 3-10: GENERAL MODEL OF A PROCESS OR SYSTEM ...... 129
FIGURE 3-11: SEMDAM ATTRIBUTE SELECTION IS BASED ON SIX SIGMA’S
PROGRESSIVE AND ITERATIVE APPROACH TO IDENTIFY CRITICAL OUTCOMES ...... 132
FIGURE 3-12: CHOOSING AN ANALYSIS: ATTRIBUTION SELECTION IMPACTS
STATISTICAL METHOD SELECTION ...... 133
FIGURE 4-1: FUNCTIONS PERFORMED BY THE VARIOUS TYPES OF MARKETPLACES ...... 145
FIGURE 4-2: HEALTHCARE.GOV AND ITS SUPPORTING SYSTEMS ...... 146
FIGURE 4-3: TWO PRIMARY INDIVIDUAL INTERACTIONS WITH USNHC: ACQUIRE
MEDICAL COVERAGE & OBTAIN MEDICAL ASSISTANCE ...... 167
FIGURE 4-4: USNHC CONGRESSIONAL AND HHS INITIATIVES OVERLAYING HEALTH
CARE SPENDING FOR 13 HIGH-INCOME COUNTRIES ...... 180
FIGURE 6-1: ASHBY’S THEORY OF REQUISITE VARIETY REFLECTING BOTH INTRINSIC
AND EXTRINSIC REGULATORS ...... 197
FIGURE A-1: ACTIVITY SUMMARY OF THE LONGITUDINAL STUDY TO DEVELOP THE
CONSOLIDATED HHS/OCR BREACH PORTAL SUMMARY DATA SET (COBPS) ...... 226
List of Tables
TABLE 2-1: ASHBY’S DEFINED SYSTEM STATES ...... 24
TABLE 2-2: LANGTON & WOLFRAM’S DEFINED STATES FOR CA THAT SUPPORT
UNIVERSAL COMPUTATION ...... 28
TABLE 2-3: TOWERS LEVELS OF APPROPRIATE COMPETENCIES ...... 29
TABLE 2-4: EOS STANDARDS OF PRACTICE ...... 40
TABLE 2-5: ASSUMPTIONS IN ORDERED (TRADITIONAL) SYSTEMS ENGINEERING ...... 43
TABLE 2-6: EUOS STANDARDS OF PRACTICE ...... 55
TABLE 2-7: SUMMARY OF CYNEFIN FRAMEWORK DOMAIN NAMES AND MULTI-
ONTOLOGICAL FOUNDATIONS ...... 72
TABLE 2-8: SUMMARY OF 15504 PROCESS ASSESSMENT RANKING SCALE ...... 77
TABLE 2-9: SUMMARY OF COMPLEXITY MEASUREMENTS FOR SYSTEM STATES,
TECHNOLOGY, PROCESS, PEOPLE/WORKFORCE, AND ENVIRONMENT ...... 79
TABLE 2-10: SEMDAM ALIGNMENT BETWEEN INCOSE’S COMPLEXITY WORKING
GROUP, CYNEFIN’S TYPOLOGY OF OPERATING ENVIRONMENTS AND SEMDAM
CANDIDATE EXPLANATIONS FOR COSP ...... 84
TABLE 2-11: PROPOSED ALIGNMENT BETWEEN INFERRED COSP AND COMPLEXITY
APPROPRIATE SEM ...... 88
TABLE 3-1: DEFINITION OF EVIDENCE E FOR SEMDAM ...... 98
TABLE 3-2: SUMMARY OF ALTERNATIVES CONSIDERED ...... 98
TABLE 3-3: CHARACTERIZATION OF DATA TYPES ...... 128
TABLE A-1: LONGITUDINAL VARIABLES FOR HHS/OCR BREACH PORTAL STUDY ...... 227
TABLE A-2: CONSOLIDATED HHS/OCR BREACH PORTAL SET (COBPS) ...... 230
List of Acronyms
ANOVA	Analysis of Variance
BMA	Business or Mission Analysis
CAS	Complex Adaptive Systems
COBPS	Consolidated OCR Breach Portal Set
COSP	Class of System Problem
CSE	Complex Systems Engineering
CSM	Complex Systems Methods
DAM	Diagnostic Assessment Model
DoE	Design of Experiments
DV	Dependent Variable
EHR	Electronic Health Record
EMR	Electronic Medical Record
EOS	Engineering Ordered Systems
ESE	Enterprise Systems Engineering
ESM	Enterprise Systems Methods
EUOS	Engineering Un-Ordered Systems
FDSH	Federal Data Services Hub
FFE	Federally Funded Exchange
FM	Federal Marketplace
HCP	HealthCare Provider
HCPB	HealthCare Provider Breach
HIE	Health Insurance Exchange
HIPAA	Health Insurance Portability and Accountability Act
HSoS	Healthcare System of Systems
IA	Individuals Affected
IEC	International Electrotechnical Commission
IEEE	Institute of Electrical and Electronics Engineers
INCOSE	International Council on Systems Engineering
ISO	International Organization for Standardization
IV&V	Independent Verification and Validation
MBSE	Model Based Systems Engineering
NHIN	National Healthcare Information Network
ONC	The Office of the National Coordinator for Health IT
PHI	Protected Health Information
PII	Personally Identifiable Information
PM/SEM	Program Management and/or System Engineering and Management
PPACA	Patient Protection and Affordable Care Act
PQC	Process Quality Characteristic
QHP	Qualified Health Plan
SBE	State Based Exchange
SE	Systems Engineering
SEH	Systems Engineering Handbook
SEM	System Engineering Method
SEMDAM	System Engineering Method Diagnostic Assessment Model
SHEII	State Health Environment Information Integrity
SOI	System of Interest
SoSE	Systems of Systems Engineering
SoSM	System-of-Systems Methods
SSM	Soft Systems Methodology
TSE	Traditional Systems Engineering
TSM	Traditional Systems Methods
USNHC	United States National HealthCare
1 Introduction
Everybody Talks About the Weather, But Nobody Does Anything About It.
Charles Dudley Warner
Often attributed incorrectly to Mark Twain
Schlager wrote “increased complexity of systems … has led to an emphasis on the field of systems engineering” adding “the first need for systems engineering was felt when it was discovered that satisfactory components do not necessarily combine to produce a satisfactory system.” (Schlager, 1956) ISO/IEC/IEEE Standard 15288, Systems and software engineering – System life cycle processes, states “the complexity of man-made systems has increased to an unprecedented level. This has led to new opportunities, but also to increased challenges for the organizations that create and utilize systems.”
(ISO/IEC/IEEE 15288:2015(E), 2015, p. vii) INCOSE wrote:
Some consider systems engineering to be a young discipline, while others consider it to be quite old. Whatever your perspective, systems and the practice for developing them has existed a long time. The constant through this evolution of systems is an ever-increasing complexity which can be observed in terms of the number of system functions, components, and interfaces and their non-linear interactions and emergent properties. Each of these indicators of complexity has increased dramatically over the last fifty years, and will continue to increase due to the capabilities that stakeholders are demanding and the advancement in technologies that enable these capabilities. (INCOSE, 2014, p. 13) {emphasis added}
von Bertalanffy wrote “modern technology and society have become so complex that the traditional branches of technology are no longer sufficient; approaches of a holistic or systems, and generalist and interdisciplinary, nature became necessary.” (von
Bertalanffy, The History and Status of General Systems Theory, 1972) MITRE wrote
“the complexity we are seeing in the enterprises and systems that MITRE helps engineer requires a spectrum of systems engineering techniques.” (MITRE, 2014, p. 37) Piaszczyk wrote “dealing with the complexity of modern systems requires a complete revision of
approaches and methods of systems engineering” adding “the old ways won’t do anymore.” (Piaszczyk, 2011)
Different classes of problems warrant different classes of solutions. Shenhar and
Bonen wrote “One of the difficulties in developing a better understanding of systems engineering is that little distinction has been made in the literature between the system type and its strategic type, or its systems engineering and managerial problems” adding
“both project management style and systems engineering practice differ with each specific kind of system and that management attitudes must be adapted to the proper system type.” (Shenhar & Bonen, 1997)
While IEEE describes the 15288 system life cycle processes as “a common framework” that “can be applied at any level in the hierarchy of a system’s structure,”
15288 doesn’t include guidance on identification or distinction of system type and/or strategic type stating “users of {15288} are responsible for selecting a life cycle model for the project and mapping the processes, activities, and tasks … into that model” adding
“The parties are also responsible for selecting and applying appropriate methodologies, methods, models and techniques suitable for the project.” (ISO/IEC/IEEE
15288:2015(E), 2015, p. 2) Orasanu and Shafto wrote that system design mistakes occur when people misclassify or misdiagnose situations. (Orasanu & Shafto, 2009)
Maier questioned “whether or not there is a useful taxonomic distinction between various complex, large-scale systems that are commonly referred to as systems-of- systems” adding “For there to be a useful taxonomic distinction, we should be able to divide systems of interest into two (or more) classes such that the members of each class
share distinct attributes, and whose design, development, or operations pose distinct demands.” (Maier, Architecting Principles for Systems-of-Systems, 1998)
Shenhar and Bonen wrote “Systems engineering and program management must be conducted according to the proper style and be adapted to the system type” adding “when a wrong style is utilized, or when the system is misclassified, may result in substantial difficulties and delays in the process of the system creation.” (Shenhar & Bonen, 1997)
Gilbertson, Tanju and Eveleigh wrote “misclassification of systems may impact successful deployment.” (Gilbertson, Tanju, & Eveleigh, 2017) Maier described system misclassification as “incorrectly regarding a system-of-systems (SoS) as a monolithic system, or the reverse” warning future SEs “they may use inappropriate mechanisms for ensuring collaboration and may assume cooperative operations across administrative boundaries that will not reliably occur in practice.” (Maier, Architecting Principles for
Systems-of-Systems, 1998) Shenhar and Bonen wrote “adapting the wrong system and management style may cause major difficulties during the process of system creation.”
(Shenhar & Bonen, 1997) INCOSE describes a limitation of systems engineering practice in that it “is only weakly connected to the underlying theoretical foundation” stressing that “understanding the foundation enables the systems engineer to evaluate and select from an expanded and robust toolkit.” (INCOSE, 2014, p. 40)
Shenhar and Bonen wrote “the creation of complex man-made systems probably has its historical roots in early civilization.” (Shenhar & Bonen, 1997) INCOSE wrote
“advancements in technology not only impact the kinds of systems that are developed, but also the tools used by systems engineers” adding “system failures have provided lessons that impact the practice, and factors related to the work environment remind us
that systems engineering is a human undertaking.” (INCOSE, 2014, p. 13) Conway’s
Law states “organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations” adding “the design which occurs first is almost never the best possible, the prevailing system concept may need to change.” (Conway, 1968)
The most specific description of the need for SE to understand the class of system problem (COSP) and select an appropriate system engineering method (SEM) is provided by DeRosa, Grisogono, Ryan, and Norman, who posit that it is not possible to engineer complex systems using non-complex processes:
Real world problems that display certain properties, possess a degree of complexity that require a commensurately complex approach. Theoretical support for this assertion can be drawn from Ashby's Law of Requisite Variety, which demonstrates that a system must have sufficient variety – and consequently sufficient complexity – for the problem it is designed to solve, and from Bar Yam's proof that the functional complexity of a system scales exponentially with the complexity of environmental variables. Together, these theorems imply that genuinely complex needs can only be met by a sufficiently complex system. (DeRosa, Grisogono, Ryan, & Norman, 2008)
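The requisite-variety argument quoted above can be stated compactly. The following is one common information-theoretic reading of Ashby’s law, offered as an illustration; the notation is not drawn from the authors quoted above.

```latex
% Ashby's Law of Requisite Variety, compact form: if V(.) denotes
% variety (the number of distinguishable states a set can exhibit),
% then the residual variety of outcomes O that a regulator R can
% achieve against disturbances D is bounded below:
\[
  V(O) \;\geq\; V(D) - V(R)
\]
% Outcome variety can be driven to its minimum only when
% V(R) >= V(D): "only variety can absorb variety." Read against SEM
% selection, a method (the regulator) of insufficient variety cannot
% constrain a problem (the disturbance) of greater complexity.
```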
1.1 General Description of the Problem
One of the negative trends in the global environment affecting the state of SE practice is the lack of a set of consistent terminology and definitions. INCOSE wrote:
There is no agreed set of unified principles and models to support systems engineering use over a wide range of domains. Nor is there a set of consistent terminology and definitions. These two deficiencies impede the adoption of systems engineering and create problems. (INCOSE, 2007, p. 10)
Standards of SE practice from the world’s two leading engineering societies,
INCOSE and IEEE, address multiple SEMs without definition or selection criteria:
- INCOSE addresses the application of SE principles for systems, systems of systems (SoS), and enterprise systems (INCOSE SEH, 2015, pp. 8, 25, 175); and,
- IEEE highlights the need to understand the differences in engineering systems and SoS with the inclusion of Annex G, Application of system life cycle processes to a system of systems, warning “the complexity of the constituent systems and the fact they may have been designed without regard to their role in the SoS, can result in new, unexpected behaviors” adding “identifying and addressing unanticipated emergent results is a particular challenge in engineering SoS.” (ISO/IEC/IEEE 15288:2015(E), 2015, p. 102)
While SE has evolved from classical or traditional SE (TSE) to include additional SE bodies of knowledge, such as systems-of-systems engineering (SoSE), there is no agreed-upon typology of SEMs nor a methodology for SEM selection, leaving each SE practitioner to apply personal judgment in selecting and tailoring an appropriate SEM. (Gilbertson,
Tanju, & Eveleigh, 2017) IEEE specifically excludes the process of SEM selection writing “users of this International Standard {15288} are responsible for selecting a life cycle model for the project and mapping the processes, activities, and tasks … into that model” adding “The parties are also responsible for selecting and applying appropriate methodologies, methods, models and techniques suitable for the project.” (ISO/IEC/IEEE
15288:2015(E), 2015, p. 2) INCOSE uses System of Interest (SOI) as a generic undifferentiated system classification abstraction representing: system element, system, system-of-systems, and/or systems-of-systems, writing that a challenge of system definition “is to understand what level of detail is necessary” adding “because SOIs are in the real world, this means that the response to this challenge will be domain specific.”
(INCOSE SEH, 2015, p. 7) INCOSE continues:
The art of defining a hierarchy within a system relies on the ability of the systems engineer to strike a balance between clearly and simply defining span of control and
resolving the structure of the SOI into a complete set of system elements that can be implemented with confidence. (INCOSE SEH, 2015, p. 8)
Both societies cite complexity as a basis for the selection, application, and tailoring of SEMs and processes, yet neither includes a definition of complexity to apply. Both identify the importance of complexity:
- INCOSE wrote “the appropriate degree of formality in the execution of any SE process activity” is determined in part by “the degree of complexity” and includes ‘complexity’ as one of the system fundamentals from a systems science context perspective; (INCOSE SEH, 2015, pp. ix, 18)
- INCOSE wrote “Because an SoS is itself a system, the systems engineer may choose whether to address it as either a system or as an SoS, depending on which perspective is better suited to a particular problem.” (INCOSE SEH, 2015, p. 8) adding that the “appropriate degree of formality in the execution of any SE process activity” should be based on the SE’s perceived need for communication, level of uncertainty, degree of complexity, and consequences to human welfare. (INCOSE SEH, 2015, p. ix)
- IEEE wrote “the detail of the life cycle implementation within a project is dependent upon the complexity of the work, the methods used, and the skills and training of personnel involved in performing the work.” (ISO/IEC/IEEE 15288:2015(E), 2015, p. 24)
INCOSE wrote that “stakeholders are demanding increasingly capable systems that are growing in complexity, yet complexity-related system misunderstanding is at the root of significant cost overruns and system failures” adding “there is broad recognition that there is no end in sight to the system complexity curve.” (INCOSE, 2014, p. 29) INCOSE
wrote SE “must scale and add value to a broad range of systems, stakeholders, and organizations with diversity of size and complexity” while avoiding the perception of SE processes as “burdensome, heavyweight efforts, leading to unjustified cost and time overheads.” (INCOSE, 2014, p. 25; INCOSE, 2007, p. 14)
1.2 Major Research Questions
Delivery of a successful system at an acceptable cost is dependent upon use of a SEM that is drawn from a typology of SEMs grounded in SE theory; appropriate for the COSP based on evidence and logic rather than characteristics and assumptions; and identified using a demonstrable diagnostic assessment model (DAM). Therefore, the research objectives are:
- Development of a generalizable typology of the existing and emerging set of SEMs based on SE theory, verified by descriptive method in Section 2.3.3, Codifying TSM; Section 2.3.4, Codifying SoSM; Section 2.4.2, Codifying ESM; Section 2.4.3, Codifying CSM; and Section 3.1.2.2, Defining candidate explanations H1, …, Hn, and by qualitative method in Chapter 5;
- Development of a COSP sensing framework, verified by descriptive method in Section 2.5, Cynefin Sense-Making Framework, and Section 3.1.2.2, Defining candidate explanations H1, …, Hn, and by qualitative method in Chapter 5;
- Development of a generalizable diagnostic assessment model (DAM) to assess COSP and recommend a complexity appropriate SEM, verified by descriptive and correlational methods in Chapter 3 and by qualitative method in Chapter 4; and,
- Demonstration of the System Engineering Method Diagnostic Assessment Model (SEMDAM), verified by qualitative method in Chapter 4.
1.3 Significance & Justification
Sheard et al. wrote “Systems engineers’ toolkits should include a wide range of methods and processes to address environmental and system complexity in appropriate and useful ways.” (Sheard, et al., A Complexity Primer for Systems Engineers, 2016)
Delivery of a successful service or system at an acceptable cost is dependent upon selection and use of an appropriate SE method suitable for the problem. This research applies the Cynefin sense-awareness framework to develop a diagnostic assessment model (DAM) in which complexity is measured by the association of a priori prediction of system output, or a posteriori perception of system response, by program management (PM)/system engineering management (SEM); the DAM is then used to recommend an appropriate SE method for a broad range of SE problems of increasing complexity. Sheard et al. wrote:
A key first step is one of diagnosis – the systems engineer must identify the kind and extent of complexity that bears on the problem set. As we have seen, complexity can exist in the problem being addressed, in its environment or context, or in the system under consideration for providing a solution to the problem. The diagnoses made will allow the systems engineer to tailor his/her approaches to key aspects of the systems engineering process: requirements elicitation, trade studies, the selection of a development process life cycle, solution architecting, system decomposition and subsystem integration, test and evaluation activities, and others. (Sheard, et al., A Complexity Primer for Systems Engineers, 2016)
Sheard wrote "no comprehensive measure of complexity is widely used in the systems engineering field today" but added "many benefits would come from having a well-understood way to quantify the complexity of a design or a development effort."
Sheard concluded “this is a field of great promise. The questions of how to measure complexity and how to use the measure to mitigate project problems would have a huge impact if solved.” (Sheard S. A., Assessing the Impact of Complexity Attributes on
System Development Project Outcomes, 2012, pp. 19, 178)
1.4 Scope and Limitations
Using the Cynefin framework where complexity domains are associated with observation of cause and effect, SEMDAM provides a recommendation for a COSP appropriate SEM based on measuring statistically significant association for a priori prediction of system outcomes or a posteriori perception of system response. This research is based on observational study and not design of experiments; therefore,
SEMDAM’s statistical models look for correlation rather than causality. SEMDAM is based on abductive reasoning, also referred to as diagnosis, which typically begins with an incomplete set of observations and proceeds to the likeliest possible explanation.
Analogous to a medical diagnosis, SEMDAM does not guarantee selection of the “best”
SE method. Rather, SEMDAM provides a recommendation of an appropriate SE method based on the PM/SEM understanding of COSP. Since SEMDAM has both predictive and perspective elements, passage of time is simulated by evaluating current abductive logic statements as though they were conducted concurrently or previously as appropriate.
There is no consistent agreement within the SE community on an SE method ontology. While there is broad SE community agreement on the need to extend SE to better cope with increasing complexity, there is no consensus on the boundaries or scope(s) of the emerging SE methods. This research into developing a decision analysis model for SE method selection considered TSM, SoSM, ESM, and CSM, which may need to be revisited as existing SE methods evolve and/or new SE methods emerge. In addition to the complexity domains, the Cynefin framework describes transitions from one complexity domain to another, which were beyond the scope of this research.
1.5 Overview of Dissertation
Modern practitioners of systems engineering have multiple SE methods available for use in delivering modern systems. While the concept of complexity is integral to understanding how and when to apply appropriate SE methods, there is no agreed-upon definition of complexity, nor is there an established best practice for selecting an SE methodology. Assuming that traditional SE methodologies work for all systems is a high-risk approach. This dissertation develops a complexity-based diagnostic assessment model, based on the Cynefin Framework, as a novel approach to recommend an appropriate SE domain to eliminate or reduce misclassification of systems and, by extension, system failure. An overview of this dissertation is presented using the Vee Model in Figure 1-1 to provide perspective for the document chapters and appendix.
Figure 1-1: Research and Dissertation Organization for Developing System Engineering Method Diagnostic Assessment Model (SEMDAM)
Following this introductory chapter, this document is organized as follows:
- Chapter 2, Literature Review, describes research related to the concepts of systems, SE methodologies, and system complexity;
- Chapter 3, SEMDAM Methodology, describes the research design, defines SEMDAM, and describes attribute and statistical model selection;
- Chapter 4, SEMDAM Applied to an Empirical Case Study, introduces and analyzes the U.S. National Healthcare (USNHC);
- Chapter 5, Synthesis & Discussion, summarizes the findings from application to validate the SEMDAM methodology;
- Chapter 6, Conclusions, presents conclusions and areas for future research; and,
- Appendix A – COBPS, summarizes development of the Consolidated OCR Breach Summary (COBPS).
2 Literature Review
In a certain sense, it can be said that the notion of system is as old as European philosophy. Man, in early culture, and even primitives of today, experience themselves as being “thrown” into a hostile world, governed by chaotic and incomprehensible demonic forces which, at best, may be propitiated or influenced by way of magical practices. Philosophy and its descendant, science, was born when the early Greeks learned to consider or find, in the experienced world, an order or kosmos which was intelligible and, hence, controllable by thought and rational action. (von Bertalanffy, 1972)
Chapter 2 is organized around: philosophies and theories that provide a foundation for
SE; current and evolving SE methods (SEMs) that identify and describe bodies of SE expert knowledge, processes, and tools; and, a sense-awareness framework to identify the class of system problem (COSP). Contributions from other technical and managerial disciplines are presented within the scope of the most applicable SE topic.
Merriam-Webster defines theory as “a plausible or scientifically acceptable general principle or body of principles offered to explain phenomena.” (Merriam-Webster, 2017)
Nilsen wrote “a theory may be defined as a set of analytical principles or statements designed to structure our observation, understanding and explanation of the world” describing a theory as a combination of “definitions of variables, a domain where the theory applies, a set of relationships between the variables and specific predictions” adding “A ‘good theory’ provides a clear explanation of how and why specific relationships lead to specific events.” (Nilsen, 2015) Section 2.1, SE Theoretical
Foundations, describes Theories from Philosophy, Theories from Classical Sciences, and
Theories from Systems Science.
INCOSE wrote “Systems engineering has evolved from a combination of practices used in a number of related industries” adding “SE practices are still largely based on heuristics.” (INCOSE SEBoK, 2016, p. 28) Merriam-Webster defines heuristic as
“involving or serving as an aid to learning, discovery, or problem-solving by experimental and especially trial-and-error methods.” (Merriam-Webster, 2017) INCOSE wrote “Most systems engineers are practitioners, applying processes and methods… tailored to their domain’s unique problems” adding “these processes and methods have evolved to capture domain experts’ knowledge regarding the best approach to applying
SE.” (INCOSE SEBoK, 2016, p. 71)
SE literature contains a recurring bifurcation of systems methods exemplified by
Ramos, Ferreira and Barcelo who wrote, “Surprisingly, in a recent and evolving field, there are already references to ‘the old SE’ (or the traditional, the classical, the ordered) and ‘the new SE’.” (Ramos, Ferreira, & Barcelo, 2012) Section 2.3, Engineering Ordered
Systems (EOS), describes established SE methods proven effective in solving system and system-of-systems class problems using processes and methods based on theories from classical science subject to the assumptions that underlie those theories. Section 2.4,
Engineering Un-Ordered Systems (EUOS), describes emerging SE methods that are focused on enterprise and complex class problems using processes and methods based on the theories from systems science.
INCOSE wrote “Systems engineering continues to evolve in response to a long history of increasing system complexity” adding “Much of this evolution is in the models and tools focused on specific aspects of SE.” (INCOSE SEBoK, 2016, p. 28) Nilsen wrote:
A model typically involves a deliberate simplification of a phenomenon or a specific aspect of a phenomenon. Models need not be completely accurate representations of reality to have value. Models are closely related to theory and the difference between a theory and a model is not always clear. Models can be described as theories with a more narrowly defined scope of explanation; a model is descriptive, whereas a theory is explanatory as well as descriptive. (Nilsen, 2015)
The Oxford dictionary contains two definitions of system used in this research: (1) “a set of things working together as parts of a mechanism or an interconnecting network; a complex whole,” and (2) “a set of principles or procedures according to which something is done; an organized scheme or method.” (Oxford Dictionaries, 2017) As shown in
Figure 2-1, "system" describes the intended result (either a system or a service) per the first definition, while "method" describes the use of a specific body of system engineering knowledge, discipline(s), process(es), framework(s), approach(es), or system development life cycle(s) (SDLC) to structure, plan, develop, and deliver a System per the second.
Figure 2-1: A System Engineering “Method” is utilized to deliver a “System”
- Traditional Systems Method (TSM) – This term is used to describe delivery of a Traditional System (also called a Classical System in the literature) using Traditional Systems Engineering (TSE), described in Section 2.3.3, Codifying TSM;
- System-of-Systems Method (SoSM) – This refers to delivery of a System-of-Systems (SoS) using System-of-Systems Engineering (SoSE), described in Section 2.3.4, Codifying SoSM;
- Enterprise Systems Method (ESM) – This refers to delivery of an Enterprise System (ES) using Enterprise Systems Engineering (ESE), described in Section 2.4.2, Codifying ESM; and,
- Complex Systems Method (CSM) – This refers to delivery of a Complex System (CS) using Complex Systems Engineering (CSE), described in Section 2.4.3, Codifying CSM.
This research leverages the Cynefin sense-awareness framework – described in
Section 2.5, Cynefin sense-making Framework – which uses complexity as the basis for categorization identifying the class of system problem (COSP) faced by a leader or decision maker. Nilsen wrote:
A framework usually denotes a structure, overview, outline, system or plan consisting of various descriptive categories, e.g. concepts, constructs or variables, and the relations between them that are presumed to account for a phenomenon. Frameworks do not provide explanations; they only describe empirical phenomena by fitting them into a set of categories. (Nilsen, 2015)
Chapter 2, Literature Review, addresses the "lack of a set of consistent terminology and definitions" for SEMs based on fundamental SE theories and the lack of a consistent approach to identify COSP based on a useful and measurable definition of system complexity. It contains descriptive research on the types of Systems and associated types of Systems Engineering available at the time of this study as a foundation for Section 3.1.2.2, Defining candidate explanations H1, …, Hn, and Section 3.1.2.3, Inferring COSP 'Given Evidence E'.
2.1 SE Theoretical Foundations
Review of literature is an important aspect of SE research as Warfield wrote
“virtually every important concept that backs up the key ideas emergent in systems literature is found in ancient literature and in the centuries that follow.” (INCOSE SEH,
2015, p. 17) INCOSE wrote “To bridge the gap between different domains and communities of practice, it is important to first establish a well-grounded definition of the
‘intellectual foundations of systems engineering’, as well as a common language to describe the relevant concepts and paradigms” describing the need to “provide a framework and language that allow different communities, with highly divergent world-
views and skill sets, to work together for a common purpose.” (INCOSE SEBoK, 2016, p. 71)
Section 2.1 presents a subset of the theoretical foundations of SE organized by historical grouping of philosophers and/or theories with similar worldviews or
Weltanschauung. Section 2.1.1, Theories from Philosophy, presents historical theories of
Teleology and Vitalism. Section 2.1.2, Theories from Classical Sciences, presents recent theories of Mechanism and Evolution that adhere to the 'system-as-machine' paradigm and provide the foundation for Engineering Ordered Systems (EOS). Section 2.1.3,
Theories from Systems Science, presents modern theories (i.e., General Systems Theory,
Cybernetics, System Dynamics, etc.) that adhere to the ‘system-as-organism’ paradigm and provide the theoretical foundation for Engineering Un-Ordered Systems (EUOS).
While Complexity Theory is part of Systems Science, due to the importance of complexity in applying the Cynefin framework and developing SEMDAM, Section 2.6,
Defining System & Model Complexity, provides an in-depth treatment of the topic and presents the COSP Complexity construct used in this research.
2.1.1 Theories from Philosophy
Teleology and Vitalism are based on entelechy, which is defined as "that which realizes or makes actual what is otherwise merely potential." (Encyclopaedia Britannica,
2017) Gorod, Gandhi, White, Ireland and Sauser wrote “the Greek word sustema stood for reunion, conjunction, or assembly” adding “the concept of system surfaced during the seventeenth century, meaning a collection of organized concepts mainly in the philosophical sense.” (Gorod, Gandhi, White, Ireland, & Sauser, 2015, p. 19) While
falling into disfavour with the creation and adoption of theories from Classical Science,
Aristotle’s core concept of emergence experienced a resurgence with Systems Science.
2.1.1.1 Teleology
Aristotle is credited with developing the theory of teleology where each thing {e.g., system} contains a reason or explanation of its end, purpose, or goal. Aristotle taught that all things have a combination of essential properties, which define that thing, and accidental properties which are free to vary. (Dennett, 1996) One of the earliest documented references to this concept of system is from Aristotle in 350 BC, “In the case of all things which have several parts and in which the totality is not, as it were, a mere heap, but the whole is something beside the parts.” (Aristotle, 350 B.C.) von Bertalanffy paraphrased Aristotle’s concept of emergence as “the whole is more than the sum of its parts” and wrote that it “is a definition of the basic system problem which is still valid.”
(von Bertalanffy, The History and Status of General Systems Theory, 1972)
Several centuries later, Kurt Koffka described the situation when "a perceptual system forms a percept or gestalt, the whole thing has a reality of its own, independent of the parts," which was captured in the phrase "the whole is other than the sum of its parts."
(Dewey, 2007) Patterson described how Aristotle's concept of entelechy remains valid when he wrote "as pointed out as early in the writing of Aristotle, it is exactly the identification of purpose with the organization of components that defines the approach that we refer to as systems engineering." (Patterson, 2009, p. 65)
2.1.1.2 Vitalism Theory
Weckowicz wrote “The vitalistic theory of life proposed by Georg Ernst Stahl was a reaction to the simplistic mechanistic theories of the seventeenth century” adding “Stahl
in his Theoria Medica Vera (1707) postulated the existence of a vital force, the vitalistic essence, called by him 'soul,' which characterized all living organisms, and more generally all living matter in contradistinction to inanimate matter. The vital force was underlying all life phenomena.” (Weckowicz, 2000) Weckowicz wrote:
Throughout the nineteenth century the field of biology was dominated by the controversy between Mechanistic and Vitalistic theories of life. According to the Mechanists life processes could be completely reduced to physico-chemical events, which meant that they could be fully explained by the laws of physics and chemistry. Consequently, life processes could be fully accommodated within the framework of classical thermodynamics. According to the Vitalists these processes could not be so explained. (Weckowicz, 2000)
The theory of Teleology was a predominant belief system for several thousand years.
Eventually, theories from Classical Sciences, discussed next, displaced theories from Philosophy.
2.1.2 Theories from Classical Sciences
von Bertalanffy wrote "the Scientific Revolution of the sixteenth-seventeenth centuries replaced the descriptive-metaphysical universe epitomized in Aristotle's doctrine by the mathematical-positivistic or Galilean concept" adding "the vision of the world as a teleological cosmos was replaced by the description of events in causal, mathematical laws." The key ideas within the classical sciences are represented by
Descartes’ second maxim to “break down every problem into as many separate simple elements as might be possible” and Galileo’s resolutive methods to “reduce complex phenomena into elemental parts and processes.” (von Bertalanffy, The History and Status of General Systems Theory, 1972)
Descartes postulated that the love of knowledge, and its exploration, by all thinkers would lead to limited progress, so he proposed a better approach based on specialization, where individuals would focus on specific areas of knowledge. Gorod, White, Ireland,
Gandhi and Sauser wrote “this separation, and the related notion of a hierarchy of knowledge, is called reductionism” adding “the implication of reductionism is that a system is no more than the sum of its parts.” (Gorod, White, Ireland, Gandhi, & Sauser,
Preface, 2015, p. xi) The primary theories from Classical Sciences, Mechanism, and
Natural Selection, are described below.
2.1.2.1 Mechanism Theory
Man-made machines, such as steam or combustion engines, became the models of living systems. The functioning of these machines could be understood within the framework of classical thermodynamics formulated by Robert Mayer in 1842.
(Weckowicz, 2000). It was hoped by biologists and by physiologists that living systems would also fit the framework of classical thermodynamics and that they could be understood in mechanistic terms as physico-chemical machines. (Weckowicz, 2000)
Weckowicz wrote:
In the seventeenth century under the influence of the Cartesian, Galilean and Newtonian theories in physics, and pioneering discoveries in chemistry a tendency developed to explain life processes in mechanistic terms. Either as systems of pulleys and levers, or as hydraulic systems activated by pressure of fluids. Alternative explanations were offered by chemists who explained life processes in terms of fermentations or in terms of acids interacting with alkali. (Weckowicz, 2000)
von Bertalanffy described mechanism as a "robot model" where behavior was "explained by the mechanistic stimulus-response schedule; conditioning, according to the pattern of animal experiment, appears as the foundation of human behavior; 'meaning' was replaced by conditioned response; and specificity of human behavior was to be denied." (von Bertalanffy, General System Theory, 1972, p. 188)
2.1.2.2 Theory of Evolution
Darwin’s Theory of Evolution, The Origin of Species, was published in 1859.
Darwin’s objective was to identify one classification scheme, based on a historical approach where “species are not eternal and immutable; they have evolved over time and can give birth to new species in turn.” (Dennett, 1996) Arnold and Fristrup wrote
“hierarchical structure has long been fundamental to our understanding of biology, both in the anatomy of individuals and in the systematic classification of individuals into higher level aggregates” adding “species, populations, individuals, and genes are widely recognized as fundamental expressions of the evolutionary process.” (Arnold & Fristrup,
1982) Dennett wrote:
Darwin succeeded not only because he documented his ideas exhaustively but also because he grounded them in a powerful theoretical framework. In modern terms, he had discovered the power of an algorithm which is a formal process that can be counted on to yield a certain kind of result whenever it is ‘run’ or instantiated. (Dennett, 1996)
Darwin attributed the development of new species to the accumulation of chance minimal variations, to random genetic mutations, and to the pressure of natural selection.
(Weckowicz, 2000)
2.1.3 Theories from Systems Science
INCOSE wrote “systems science is an integrative discipline bringing together research of systems with the goal of identifying, exploring, and understanding patterns of complexity to provide a common language and an intellectual foundation for systems engineering” adding “Research in systems science attempts to compensate for the inherent limitations of classical science, most notably the lack of ways to deal with emergence.” (INCOSE SEH, 2015, p. 18)
While Classical Science theory may be summed up by the phrase “a system is no more than the sum of its parts,” Systems Science takes a very different perspective that may be summarized as “the whole is more than the sum of its parts.” von Bertalanffy wrote “in order to understand an organized whole we must know both the parts and the relations between them.” (von Bertalanffy, The History and Status of General Systems
Theory, 1972)
INCOSE wrote “The systems approach (derived from systems thinking) and systems engineering (SE) have developed and matured, for the most part, independently” adding
“therefore, the systems science and the systems engineering communities differ in their views as to what extent SE is based on a systems approach and how well SE uses the concepts, principles, patterns and representations of systems thinking.” (INCOSE
SEBoK, 2016, p. 179)
2.1.3.1 General System Theory (GST)
General System Theory (GST) was codified when Braziller published von
Bertalanffy’s General System Theory: Foundations, Development, Applications in 1972.
(von Bertalanffy, General System Theory, 1972) This book consolidated papers previously published from 1940 to 1969. (von Bertalanffy, The History and Status of
General Systems Theory, 1972) Much of the modern SE body of knowledge leverages the concept of system that is attributed to the work of von Bertalanffy (1901 - 1972) who wrote “If you take any realm of biological phenomena … you will always find that the behavior of an element is different within from what it is in isolation” adding “You cannot sum up the behavior of the whole from the isolated parts.” (von Bertalanffy,
General System Theory, 1972, p. 68)
A biologist, von Bertalanffy, developed general systems theory (GST) while,
“attempting to build a bridge between natural sciences and humanities.” (Weckowicz,
2000) When von Bertalanffy started writing about general system theory in the 1940s, relatively little attention was paid to him; however, scientists have since become interested in his research and impressed by this promising effort to find common laws applying to such widely diverse subjects as biology, economics, psychology, and demography. Developments in information theory, computer technology, and cybernetics are related to general system theory and have contributed to it. (Weckowicz, 2000) Weckowicz wrote:
Throughout the nineteenth century the field of biology was dominated by the controversy between Mechanistic and Vitalistic theories of life. According to the Mechanists life processes could be completely reduced to physico-chemical events, which meant that they could be fully explained by the laws of physics and chemistry. Consequently, life processes could be fully accommodated within the framework of classical thermodynamics. According to the Vitalists these processes could not be so explained. The Vitalists assumed that in life processes in addition to the physical and chemical forces there was operating a specific agent which was only present in living matter. (Weckowicz, 2000)
Ludwig von Bertalanffy is mainly remembered as the originator of the open systems theory in biology, a theory which rejected both the mechanistic and the vitalistic explanations of life processes. (Weckowicz, 2000) von Bertalanffy wrote the Theory of
Open Systems "consists of the scientific exploration of 'wholes' and 'wholeness' which, not so long ago, were considered to be metaphysical notions transcending the boundaries of science." (von Bertalanffy, The History and Status of General Systems Theory, 1972)
2.1.3.2 Cybernetics Theory
Wiener first coined the term "cybernetics" in 1948 to describe "the most fruitful areas for the growth of the sciences were those which had been neglected as a no-man's land between various established fields" adding "there are fields of scientific work which have been explored from different sides of pure mathematics, statistics, electrical engineering,
and neurophysiology; in which every single notion receives a separate name from each group.” (Wiener, 1961, p. 2) von Bertalanffy wrote that cybernetics “is a theory of control systems based on communication (transfer of information) between system and environment and within the system, and control (feedback) of the system’s function in regard to environment.” (von Bertalanffy, General System Theory, 1972, p. 21)
Ashby published An Introduction to Cybernetics in 1956, writing "Cybernetics was defined by Wiener as 'the science of control and communication, in the animal and the machine' – in a word, as the art of steersmanship; and it is to this aspect that the book will be addressed." (Ashby, 1956, p. 1) Ashby wrote "we shall examine the process of regulation itself, with the aim of finding out exactly what is involved and implied" adding "we shall develop ways of measuring the amount or degree of regulation achieved, and we shall show that this amount has an upper limit." (Ashby, 1956, p. 202) Ashby states the Law as "only variety can destroy variety;" (Ashby, 1956, p. 207) however, a more useable representation is that in order for a system to remain stable, the variety in the regulator V_R must be at least as great as the variety in the disturbance being regulated V_D.
Ashby’s Theory of Requisite Variety, shown in Figure 2-2, states:
V_O ≥ V_D − V_R (1)

Where:

V_O = Variety of Outcome measured logarithmically (i.e., Output)
V_D = Variety of Disturbance measured logarithmically (i.e., Input)
V_R = Variety of Regulator measured logarithmically (i.e., Feedback)
T = Table (i.e., System Function)
Figure 2-2: Graphical View of Ashby's Theory of Requisite Variety
Ashby’s Theory of Requisite Variety defines the set of states of operation for systems shown in Table 2-1.
Table 2-1: Ashby's Defined System States

System State | Condition | Meaning
Stable | V_O ≥ V_D − V_R | The regulator contains requisite variety to control the outcome for a given disturbance
Unstable | V_O < V_D − V_R | The regulator does not contain requisite variety to control the outcome for a given disturbance
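Equation (1) and the stability test in Table 2-1 can be exercised numerically. The sketch below is an illustrative reading of Ashby's law, not part of SEMDAM: each variety is measured logarithmically as log2 of the number of distinguishable states, and the chosen state counts are hypothetical.

```python
import math

def variety(n_states):
    """Variety measured logarithmically: log2 of distinguishable states."""
    return math.log2(n_states)

def is_stable(n_outcomes, n_disturbances, n_regulator_states):
    """Table 2-1 stability test: V_O >= V_D - V_R."""
    v_o = variety(n_outcomes)
    v_d = variety(n_disturbances)
    v_r = variety(n_regulator_states)
    return v_o >= v_d - v_r

# A regulator with 8 states facing 16 possible disturbances can hold
# the outcome to 4 states: log2(4) = 2 >= log2(16) - log2(8) = 1.
print(is_stable(4, 16, 8))   # True
# With only 2 regulator states the same goal fails: 2 >= 4 - 1 is False.
print(is_stable(4, 16, 2))   # False
```

The example illustrates the practical reading of "only variety can destroy variety": to hold the outcome within a tighter band, the regulator must supply more variety of its own.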
2.1.3.3 Hard & Soft Systems Perspectives Methodologies
INCOSE SEBoK wrote "Hard systems views of the world are characterized by the ability to define purpose, goals, and missions that can be addressed via engineering methodologies in an attempt to, in some sense, 'optimize' a solution" adding "In hard system approaches the problems may be complex and difficult, but they are known and can be fully expressed by the investigator." (INCOSE SEBoK, 2016, p. 108)
INCOSE SEBoK wrote “Soft systems views of the world are characterized by complex, problematical, and often mysterious phenomena for which concrete goals cannot be established and which require learning in order to make improvement” adding
“Soft system approaches reject the idea of a single problem and consider problematic
situations in which different people will perceive different issues depending upon their own viewpoint and experience.” (INCOSE SEBoK, 2016, p. 108) Checkland wrote:
The word 'system' is used simply as a label for something taken to exist in the world outside ourselves. The taken-as-given assumption is that the world can be taken to be a set of interacting systems, some of which do not work very well and can be engineered to work better. In the thinking embodied in SSM the taken-as-given assumptions are quite different. The world is taken to be very complex, problematic, mysterious. However, our coping with it, the process of inquiry into it, it is assumed, can itself be organized as a learning system. Thus, the use of the word 'system' is no longer applied to the world, it is instead applied to the process of our dealing with the world. It is the shift of systemicity (or 'systemness') from the world to the process of inquiry into the world which is the crucial intellectual distinction between the two fundamental forms of systems thinking, 'hard' and 'soft'.

In the literature it is often stated that 'hard' systems thinking is appropriate in well-defined technical problems and that 'soft' systems thinking is more appropriate in fuzzy ill-defined situations involving human beings and cultural considerations. (Checkland, 2000)
2.1.3.4 Systems Thinking Methodology
Systems thinking is a methodology for considering an entire system by viewing inputs that are processed to generate outputs. According to INCOSE, “systems thinking is the discovery of patterns” adding that a systems thinker “identifies the circular nature of complex cause-and-effect relationships.” (INCOSE SEH, 2015, p. 20) Senge wrote that,
“businesses and other human endeavors are also systems … bound by invisible fabrics of interrelated actions, which often take years to fully play out their effects on each other” adding “systems thinking is a conceptual framework, a body of knowledge and tools that has been developed over the past fifty years, to make the full patterns clearer, and to help us see how to change them effectively.” (Senge, 2000)
2.1.3.5 Complexity Theory
Sheard et al. wrote, "complexity is nothing new to systems engineers and managers" adding, "in ordinary language, we often call something complex when we
can’t fully understand its structure or behavior: it is uncertain, unpredictable, complicated or just plain difficult.” (Sheard, et al., INCOSE Complex Systems Working Group White
Paper - A Complexity Primer for Systems Engineers, 2015) The definition of complexity used in this research and its derivation are presented in Section 2.6, Defining System &
Model Complexity, below.
Manson wrote “advocates of complexity theory see it as a means of simplifying seemingly complex systems. The actual practice of complexity theory, however, is anything but simple in that there is no one identifiable complexity theory.” (Manson,
2001) While complexity theory began as a new way of working with mathematical models, the body of knowledge has adopted the precept that complexity is more a way of thinking about the world. Like SE, theoretical research into complexity has multiple variations or areas of interest. The complexity theories that follow are variations of theoretical research into the general subject of complexity, each of which has been adapted into the SE literature.
2.1.3.6 Computational Complexity Theory
Dodder and Dare wrote “one apparently crucial element in any reasonable measure of complexity is the information processed or exchanged by the system under study” adding
“Shannon’s information theory uses this quantity of information as an indicator of complexity. Another widely explored measure is the Algorithmic Information Content
(AIC), which relates complexity to the minimum amount of information needed to describe the system, as measured by the shortest computer program that can generate that system.” (Dodder & Dare, 2000) Langton researched cellular automata (CA) which he defined as “discrete space/time logical universes, obeying their own local physics”
adding “cellular automata can be viewed either as computers themselves or as logical universes, within which computers may be embedded.” (Langton, 1990, pp. 25, 35)
Langton published his dissertation on complexity science, The Edge of Chaos, which
“exhibits the nature of a complex system by neither order nor disorder.” (Gorod, Gandhi,
White, Ireland, & Sauser, 2015, p. 22) Langton’s research focused on providing
“evidence for the existence of ‘critical’ phase-transitions in the space of CA and for the identification of a transition regime with the most complex CA dynamics, those which support universal computation” adding “it means that information processing is likely to become an important factor in the dynamics of physical systems when they are in the vicinity of a phase-transition between ordered and disordered behavior.” (Langton, 1990, p. 6) Langton wrote “there have been other attempts to characterize those CA rules that would support universal computation” adding “the best known of these is due to Wolfram who proposed four qualitative classes of CA behavior.” (Langton, 1990, p. 3) Langton summarized the primary finding writing:
By defining the appropriate parameter {Lambda} over the space of possible CA rules, and by using this parameter to step through the space of CA rules in an ordered fashion, one passes through the spectrum of dynamical behaviors in the following order:
Fixed-point ⇒ periodic ⇒ “complex” ⇒ chaotic
This corresponds to passing through the Wolfram classes in the following order:
I ⇒ II ⇒ IV ⇒ III
This association between phase-transitions and computations provides a new perspective on computation in general, one which reveals a simple and elegant structure among what previously amounted to a large and unorganized collection of loosely related theorems, lemmas, theses, and observations. (Langton, 1990, p. 7)
Langton asserts that “there is a fundamental connection between phase-transitions and computations – in particular between critical dynamics and universal computation.”
(Langton, 1990, p. 5) Table 2-2 contains the spectrum of phase transitions for CA for both Langton and Wolfram.
Table 2-2: Langton & Wolfram’s Defined States for CA that support Universal Computation

Langton’s CA States   Wolfram’s CA States   Meaning
Fixed point           Class I               Relax to a homogeneous fixed point
Periodic              Class II              Relax to a heterogeneous fixed point or to short-period behavior
Complex               Class IV              Support complex interactions between localized structures, often exhibiting long transients
Chaotic               Class III             Relax to chaotic, random behavior
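The rule-space sweep and Wolfram classes described above can be made concrete with a one-dimensional elementary CA. The following sketch is illustrative only (it is not Langton’s code, and the lattice size and rule choices are arbitrary): it steps an elementary CA by Wolfram rule number. Rule 254 relaxes to a homogeneous fixed point (Class I behavior), rule 90 produces chaotic Class III patterns, and rule 110 is the best-known Class IV “complex” rule.

```python
def step(cells, rule):
    """One synchronous update of a 1D elementary CA with wrap-around.

    Each cell's next value is the bit of `rule` indexed by its 3-cell
    neighborhood (left, center, right) read as a binary number -- the
    standard Wolfram rule-numbering convention."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, cells, steps):
    """Record the full space-time history of the automaton."""
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history

# A single live cell in a field of 32. Rule 254 (Class I) saturates,
# while Rule 90 (Class III) generates a chaotic Sierpinski-like pattern.
seed = [0] * 32
seed[16] = 1
print(run(254, seed, 16)[-1])   # homogeneous fixed point: all 1s
print(run(90, seed, 4)[-1])
```

Stepping through rules ordered by the fraction of neighborhoods mapped to a live state is, in spirit, Langton’s λ sweep: the interesting long-transient dynamics appear between the fixed-point and chaotic regimes.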
2.1.3.7 Complexity in Organizational Theory
Conway wrote “organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations” adding
“This criterion creates problems because the need to communicate at any time depends on the system concept in effect at that time. Because the design which occurs first is almost never the best possible, the prevailing system concept may need to change.”
(Conway, 1968)
Rhodes, Lamb and Nightingale wrote “society as a whole is faced with increasingly complex systems problems in critical infrastructure, energy, transportation, communications, defense, and other areas” describing empirical research on systems thinking and the practice of systems engineering as “understanding how to achieve more effective SE practice through understanding of the context in which SE is performed and the factors underlying the competency of the systems workforce.”
(Rhodes, Lamb, & Nightingale, 2008) Understanding ‘the context in which SE is performed’ is discussed in Section 2.2.3, System Structure, below.
Towers described enablers for model based systems engineering (MBSE) as: architecture frameworks, process frameworks, people, and tools where the people involved have the appropriate workforce competencies shown in Table 2-3.
Table 2-3: Towers Levels of Appropriate Competencies

Practices   Work Type         Skill Level   How to Achieve
Best        “Assembly Line”   Proficiency   Training
Good        Information       Fluency       Training & Experience
Emergent    Knowledge         Literacy      Deliberate Practice
Novel       Concept           Mastery       Deliberate Practice (10,000 hrs)

(Towers, 2016, p. 17)
Gladwell provides support for Towers’ uniquely specific association of mastery with 10,000 hours, writing “the idea that excellence at performing a complex task requires a critical minimum level of practice surfaces again and again in studies of expertise. Researchers have settled on what they believe is the magic number for true expertise” adding “the emerging picture from such studies is that ten thousand hours of practice is required to achieve the level of mastery associated with being a world-class expert in anything.” (Gladwell, 2008, p. 40)
2.1.3.8 Complex Adaptive Systems Theory
Gorod, Gandhi, White, Ireland, and Sauser wrote “the genesis of complex systems theory can be traced back to the cybernetics movement which started during World War
II” citing the works of Weiner and Ashby presented in Section 2.1.3.2, Cybernetics
Theory, adding “Langton (1990) called the complexity science “The Edge of Chaos,” which exhibits the nature of a CS (Complex System) as characterized by neither order nor disorder.” (Gorod, Gandhi, White, Ireland, & Sauser, 2015, p. 19)
Dodder and Dare wrote “The rise of complex adaptive systems (CAS) as a school of thought took hold in the mid-1980s with the formation of the Santa Fe Institute, a New
Mexico think tank formed in part by former members of the nearby Los Alamos National
Laboratory” adding “One important emphasis with CAS is on the crossing of traditional disciplinary boundaries. CAS provides an alternative to the linear, reductionist thinking that has ruled scientific thought since the time of Newton.” (Dodder & Dare, 2000)
Hayenga cited Kelly (1994) when he wrote “the key insight uncovered by the study of complex systems in recent years is this: the only way for a system to evolve into something new is to have a flexible structure” adding “a tiny tadpole can change into a frog, but a 747 Jumbo Jet can’t add six inches to its length without crippling itself.”
(Hayenga, 2008)
Sheard and Mostashari wrote “Management and technical work are much more meshed in complex systems work than in standard mechanistic engineering of small systems” recommending that technical managers “plot a complex situation on the
Cynefin instrument to identify patterns of problems and solutions.” (Sheard &
Mostashari, Principles of Complex Systems for Systems Engineering, 2009) The Cynefin sense-awareness framework is described in Section 2.5, Cynefin Sense-Awareness Framework, below.
2.1.3.9 Chaos Theory
Chaos theory is a sub-discipline of the more general complexity theory that uses mathematics to study complex systems that are highly sensitive to initial conditions or rounding errors within computational systems – sometimes referred to as the butterfly effect – even when the underlying system is deterministic. Originally developed at MIT
to study weather, the theory was summarized by Lorenz who wrote “Chaos: When the present determines the future, but the approximate present does not approximately determine the future.” (Lorenz, 1963)
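Lorenz’s observation can be illustrated numerically with the logistic map, a standard minimal example of deterministic chaos (used here as a stand-in for Lorenz’s weather equations, which it is not): two trajectories whose initial conditions differ by 10⁻¹⁰ agree at first, then diverge until prediction fails.

```python
def logistic(x, r=4.0):
    """Logistic map x -> r*x*(1 - x); fully chaotic at r = 4."""
    return r * x * (1.0 - x)

def orbit(x0, steps):
    """Iterate the map, recording the deterministic trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = orbit(0.2, 50)            # "the present"
b = orbit(0.2 + 1e-10, 50)    # "the approximate present"
gap = [abs(p - q) for p, q in zip(a, b)]

# The same present determines the same future (the system is deterministic)...
assert orbit(0.2, 50) == a
# ...but the tiny initial difference is amplified at each step.
print(gap[0], max(gap))
```

The initial gap of 10⁻¹⁰ grows roughly exponentially until it is of order one, which is exactly the sense in which “the approximate present does not approximately determine the future.”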
2.1.4 Summary of SE Theoretical Foundations
Much of the body of SE knowledge contained within IEEE 15288 and the INCOSE SEH is based on theories from the classical sciences. Sheard identified that systems engineering practices deal for the most part with improving order, primarily by using the same set of assumptions of order from Newtonian and mechanical systems. Sheard called this ordered systems engineering (OSE), which is discussed in Section 2.3,
Engineering Ordered Systems (EOS). (Sheard S. , Complex Adaptive Systems in Systems
Engineering and Management, 2009, p. 1284) Sillitto wrote about the multiple fields of study ascribed to systems science and potential application to SE when he wrote:
Reviewing these lists and discussions, and following a taxonomy set out by Bertalanffy it seems that they can be aggregated into four broad categories:
Science of real systems governed by the laws of physics,
Science of real systems governed by the “laws” of biology,
Science of real systems governed by the “laws” of social behaviour; and,
Science of conceptual systems, governed by the laws of mathematics and logic.
These can be referred to in short-hand as the science of “physical”, “biological”, “social” and “conceptual” systems. (Sillitto H. , 2012)
Sheard and Mostashari wrote “many aspects of the practice of systems engineering have become standardized across the industry. This is not to suggest, however, that this practice {SE} is based on an overarching and complete systems engineering theory”
adding “theory has found too little traction in improving systems engineering practice.”
(Sheard & Mostashari, A Complexity Typology for Systems Engineering, 2010)
2.2 Definition of System Used
Recall that system is used interchangeably either to identify a thing, “set of things working together as parts” (i.e., engineering a system, system of interest, enabling system), or “a set of principles or procedures” (i.e., systems engineering, systems analysis, systems thinking, etc.). (Oxford Dictionaries, 2017) IEEE 15288 demonstrates the various uses writing “Systems Engineering is an interdisciplinary approach and means to enable the realization of successful systems” or “This International Standard provides a common process framework for describing the life cycle of systems created by humans, adopting a
Systems Engineering approach.” (ISO/IEC/IEEE 15288:2015(E), 2015, p. vii) The last example shows Systems associated with life cycle which is discussed below.
Given the potential for confusion, this research will annotate external references’ use of system as necessary to associate meaning from: system (thing), system (principles) or system (processes). Unless otherwise noted, use of the phrase system refers to system (thing), which aligns with “combination of interacting elements organized to achieve one or more stated purposes.” (ISO/IEC/IEEE 15288:2015(E), 2015, p. 9) Sillitto wrote, and INCOSE incorporated, that a system {thing} has the following:
A life cycle – Life Cycle is used throughout SE literature to mean either the phases a project passes through or principles and/or procedures; (Sillitto H. , 2012)
Function – What a system does; systems exist to deliver functionality; (Sillitto H. G., 2009)
Structure – Defined as a boundary, a set of parts, and the set of relationships and potential interactions between the parts of the system and across the boundary (interfaces); (INCOSE SEH, 2015, p. 20; Sillitto H. , 2012)
Behavior – the way in which someone or something functions, acts, or reacts to a stimulus, or responds to situations, including state change and exchange of information, energy and resources; and, (Sillitto H. G., 2009; Sillitto H. , 2012)
Performance characteristics – associated with function and behavior in given environmental conditions and system states. (INCOSE SEH, 2015, p. 20; Sillitto H. G., 2009; Sillitto H. , 2012)
INCOSE wrote “Systems contain multiple feedback loops with variable time constants” adding “cause‐and‐effect relationships may not be immediately obvious or easy to determine.” (INCOSE SEH, 2015, p. 21)
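The effect of feedback loops with variable time constants can be sketched with two first-order negative-feedback loops driven by the same step input (an illustrative model only; the time constants and forward-Euler integration below are arbitrary choices, not drawn from INCOSE): the fast loop responds almost immediately, while the slow loop’s response to the same cause emerges much later, obscuring the cause-and-effect relationship.

```python
def simulate(steps, dt=0.1, tau_fast=1.0, tau_slow=20.0):
    """Two first-order negative-feedback loops, forward-Euler integration.

    Each state relaxes toward the common step input at a rate set by its
    time constant, so one cause produces effects on very different
    time scales."""
    fast = slow = 0.0
    trace = []
    for _ in range(steps):
        u = 1.0                              # step input applied at t = 0
        fast += dt * (u - fast) / tau_fast   # settles within a few time units
        slow += dt * (u - slow) / tau_slow   # still rising much later
        trace.append((fast, slow))
    return trace

trace = simulate(400)          # 40 time units of simulated behavior
print(trace[39])               # at t = 4: fast loop near 1, slow loop far below
print(trace[-1])               # at t = 40: slow loop still has not settled
```

An observer watching only the slow variable over a short window would see almost no response to the input, which is the sense in which cause-and-effect relationships “may not be immediately obvious or easy to determine.”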
2.2.1 System Life Cycle Model
INCOSE wrote “Every man‐made system has a life cycle, even if it is not formally defined” adding “A life cycle can be defined as the series of stages through which something (a system or manufactured product) passes.” (INCOSE SEH, 2015, p. 25)
IEEE wrote “every system has a life cycle. A life cycle can be described using an abstract functional model that represents the conceptualization of a need for the system, its realization, evolution and disposal” adding “The organization may then employ this environment to perform and manage its projects and progress systems through their life cycle stages.” (ISO/IEC/IEEE 15288:2015(E), 2015, p. 9)
The phrase life cycle is used throughout SE literature to convey either the phases a project passes through or principles and/or procedures. IEEE 15288 wrote “Life cycles vary according to the nature, purpose, use and prevailing circumstances of the system. Each stage has a distinct purpose and contribution to the whole life cycle and is considered when planning and executing the system life cycle” adding “the typical system life cycle stages include concept, development, production, utilization, support, and retirement.”
(ISO/IEC/IEEE 15288:2015(E), 2015, p. 14) Inclusion of typical life cycle stages is discordant with the warning provided regarding life cycle models:
This International Standard does not prescribe a specific system life cycle model, development methodology, method, model or technique. The users of this International Standard are responsible for selecting a life cycle model for the project and mapping the processes, activities, and tasks in this International Standard into that model. The parties are also responsible for selecting and applying appropriate methodologies, methods, models and techniques suitable for the project. (ISO/IEC/IEEE 15288:2015(E), 2015, p. 2)
The use of life cycle appears to refer to both the life cycle phases and life cycle models with ‘defined series of stages’ {when tasks are performed} and the SE methodology, defined as a collection of related processes, methods, and tools used during each stage
{how tasks are performed}. (INCOSE SEH, 2015, p. 90) This research posits that a system should not be viewed as having a life cycle; rather, selection of an appropriate life cycle {method} should depend on the system. Likewise, external reference to life cycle may be annotated to indicate life cycle (phases) or life cycle (models). Unless otherwise noted, use of Life Cycle means life cycle (model) while System Engineering Method
(SEM) is intended to refer to system (processes) and/or life cycle (model).
2.2.2 System Function
System function is the embodiment of stakeholder needs and the requirements definition process which “defines the stakeholder requirements for a system that can provide the capabilities needed by users and other stakeholders in a defined environment.” (INCOSE SEH, 2015, p. 52) INCOSE describes system function as “the functional boundaries of the system and the functions the system must perform,”
(INCOSE SEH, 2015, p. 281) adding:
The functionality of a system is typically expressed in terms of the interactions of the system with its operating environment, especially the users. When a system is considered as an integrated combination of interacting elements, the functionality of the system derives not just from the interactions of individual elements with the environmental elements but also from how these interactions are influenced by the organization (interrelations) of the system elements. (INCOSE SEH, 2015, p. 6)
While this definition of system functionality “speaks to both the internal and external views of the system,” (INCOSE SEH, 2015, p. 6) it presumes a ‘defined environment’, which in turn depends on defining the system boundary, an element of System Structure, discussed next.
2.2.3 System Structure
Three concepts define system structure: the boundary; the elements; and the interactions between and among the elements and in and out of the boundary. (INCOSE
SEH, 2015, p. 20) INCOSE wrote “The internal and external views of a system give rise to the concept of a system boundary. In practice, the system boundary is a “line of demarcation” between the system itself and its greater context (to include the operating environment)” adding “It defines what belongs to the system and what does not.”
(INCOSE SEH, 2015, p. 20) Sillitto wrote “Sometimes we are interested in a particular property of interest” adding “Once we have established the property of interest, the system of interest and corresponding system boundary can be determined by finding the set of parts and relationships that are necessary and sufficient to account for the property or properties of interest.” (Sillitto H. , 2012) INCOSE SEBoK described the process of identifying a System of Interest (SOI) writing:
When humans observe or interact with a system, they allocate boundaries and names to parts of the system. This naming may follow the natural hierarchy of the system, but will also reflect the needs and experience of the observer to associate elements with common attributes of purposes relevant to their own. This way of observing systems wherein the complex system relationships are focused around a particular system boundary is called systemic resolution. (INCOSE SEBoK, 2016, p. 121)
IEEE 15288 defines systems elements as “hardware, software, data, humans, processes (e.g., processes for providing service to users), procedures (e.g., operator instructions), facilities, materials and naturally occurring entities.” (ISO/IEC/IEEE
15288:2015(E), 2015, p. 1) INCOSE SEH wrote “A system is made up of parts that interact with each other and the wider context” adding “The parts may be any or all of hardware, software, information, services, people, organizations, processes, services, etc.” (INCOSE SEH, 2015, p. 79) Interactions include both exchange of information with external systems across the system boundary and interactions among and between system elements within the system boundary.
2.2.4 System Behavior
System behavior includes state changes and exchange of information. System
Behavior is related to and dependent on system function. (INCOSE SEH, 2015, p. 20)
INCOSE wrote “a system is in a state when the values assigned to its attributes remain constant or steady for a meaningful period of time.” (INCOSE SEH, 2015, p. 6) INCOSE wrote “Systems contain multiple feedback loops with variable time constants, so that cause‐and‐effect relationships may not be immediately obvious or easy to determine.”
(INCOSE SEH, 2015, p. 21) Figure 2-3 shows an updated system representation using the system nomenclature of this section integrated with the graphical representation of
Ashby’s Theory of Requisite Variety.
Figure 2-3: System with Ashby’s Theory of Requisite Variety
Observing system behavior as states or state changes is integral to the SEMDAM requirement to be able to predict a future response or perceive a trend in a series of recent outputs. This concept of “meaningful period of time” raises the question of duration of observation or window of time which will be addressed in Section 2.6, Defining System
& Model Complexity. Ashby wrote “the most fundamental concept in cybernetics is that of difference, either that two things are recognizably different or that one thing has changed with time” adding “Every machine or dynamic system has many distinguishable states. If it is a determinate machine, fixing its circumstances and the state it is at will determine, i.e., make unique, the state it next moves to.” (Ashby, 1956, pp. 1, 27) This research is concerned with either an a priori prediction of future system outcomes or a perception of a posteriori trends as shown in Figure 2-4.
Figure 2-4: Making a Prediction or Perceiving a Trend
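Ashby’s “determinate machine” can be sketched as a transition table (the states and events below are hypothetical, chosen purely for illustration): because fixing the current state and its circumstances makes the next state unique, an a priori prediction of a future output is simply a replay of the table.

```python
# Transition table for a hypothetical determinate machine in Ashby's sense:
# given the current state and fixed circumstances (the event), the next
# state is unique, so future behavior can be predicted a priori.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "halt"): "idle",
    ("running", "fault"): "failed",
    ("failed", "reset"): "idle",
}

def next_state(state, event):
    """The unique state the machine 'next moves to' (Ashby, 1956)."""
    return TRANSITIONS[(state, event)]

def predict(state, events):
    """Replay a fixed sequence of circumstances from a known state."""
    for event in events:
        state = next_state(state, event)
    return state

print(predict("idle", ["start", "fault", "reset", "start"]))  # running
```

Observing the same machine a posteriori means recording the sequence of states it actually passed through and perceiving the trend, which is the complementary view shown in Figure 2-4.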
2.3 Engineering Ordered Systems (EOS)
Sheard wrote “systems engineering practices, as they are currently taught, address mostly the ‘order’ aspect of complex systems” in order “to increase predictability, control, system performance, and success of system development efforts” adding “we analyze using Gaussian or bell curve distributions and we plan, predict, and control in order to achieve a desired outcome.” (Sheard S. , Complex Adaptive Systems in Systems
Engineering and Management, 2009, p. 1283) Schlager wrote “the first need for systems engineering was felt when it was discovered that satisfactory components do not necessarily combine to produce a satisfactory system” adding “it should not seem unusual to find the first appearance of the term systems engineering in those industries which produced systems complexity at an early date” citing Bell Telephone Laboratories.
(Schlager, 1956)
Commercial use of the telephone began in 1876 between two instruments; however, it wasn’t until the 1940s that Bell Telephone Laboratories coined the phrase ‘systems engineering’ to describe the conception, design, development, production and operation of a national phone network comprised of physical telephony components (e.g., instruments {phones}, transmission lines, switches, etc.) that had been in use in various forms and configurations for over 50 years.
INCOSE wrote “In the Apollo program, NASA added a ‘Module Level’ in the hierarchy to breakout the Command Module, Lunar Module, etc. of the Space Vehicle
Element” noting that “Simple systems typically have fewer levels in the hierarchy than complex systems.” This led to the introduction of Systems‐of‐Systems (SoS) as “systems‐of‐interest whose system elements are themselves systems; typically, these entail large‐scale inter‐disciplinary problems involving multiple, heterogeneous, distributed systems.
These interoperating collections of component systems usually produce results unachievable by the individual systems alone.” (INCOSE SEH, 2011, p. 11) Complex and complicated are not synonymous. Hayenga wrote “quite often, we hear systems with many thousands of interacting parts, like the Space Shuttle or B-2 bomber, described as complex” adding “we need to consider those with many interacting parts to be
complicated rather than complex.” (Hayenga, 2008) DeRosa, Grisogono, Ryan and
Norman wrote:
Complex and complicated systems are often confused. The essence of complexity is interdependence. Interdependence implies that reduction by decomposition can't work, because the behavior of each component depends on the behaviors of the others. The Latin root of complex means to weave, whereas the root of complicated means to fold. Complicated systems can be unfolded into simpler components – decomposition works, while complex ones cannot be so easily unwoven. Thus, the opposite of complicated is simple, while the opposite of complex is independent. (DeRosa, Grisogono, Ryan, & Norman, 2008)
Rhodes and Hastings wrote “the classical definitions of Systems Engineering are fairly similar in nature, with some variation regarding reference to it as a practice, process, method, or approach” citing Chase (1974):
Systems Engineering is the process of selecting and synthesizing the application of the appropriate scientific and technical knowledge to translate system requirements into system design and subsequently to produce the composite of equipment, skills, and techniques that can be effectively employed as a coherent whole to achieve some stated goal or purpose. (Rhodes & Hastings, 2004)
White described conventional methods writing, “these methods generally assume that the solution is primarily a system of hardware and software, that requirements are fully understood from the start, that the organization in charge of the system solely controls its development and configuration, and that the external environment can be represented by interface specifications for machine interactions.” (White B. E., 2010) When initiated at
Bell Labs and later adopted by other organizations attempting to solve problems of unprecedented size and growing complexity, traditional or classical systems engineering was seen as a branch of engineering in the classical sense, that is, as applied only to physical systems, such as a national phone system or space craft.
2.3.1 EOS Standards of Practice
EOS standards of practice are documented in a broad array of standards, handbooks, academic literature, and web-resources, focusing on a variety of domains documented in
Table 2-4. Figure 2-5 presents the timeline for the development of EOS standards highlighting the gradual incorporation of SoSM into the TSM baseline set of artifacts.
Table 2-4: EOS Standards of Practice

Source: (Sage & Rouse, Handbook of Systems Engineering and Management, 2009)
Topic: Extensive discussion of all facets of engineering of systems
Applies to: TSM, SoSM

Source: (INCOSE SEH, 2015), Systems Engineering Handbook (SEH)
Topic: Provides overview of Systems Engineering; describes generic life cycle stages; detailed presentation of Technical, Technical Management, Agreement, and Organizational Project-Enabling Processes
Applies to: TSM, SoSM

Source: (ISO/IEC/IEEE 15288:2015(E), 2015)
Topic: Defines discipline and practice of SE; identifies hierarchy levels of systems; validates definition of SoS (8)
Applies to: TSM, SoSM

Source: (INCOSE SEBoK, 2016), Guide to the SE Body of Knowledge (SEBoK)
Topic: Frequently updated SE knowledge: Foundations of SE; SE and Management; Applications of SE; Enabling SE; Related Disciplines; implementation examples
Applies to: TSM, SoSM

Source: (MITRE, 2014), MITRE Systems Engineering Guide
Topic: Guide organization parallels the SEH
Applies to: TSM, SoSM

Source: (Maier, Architecting Principles for Systems-of-Systems, 1996)
Topic: Identification of five principal characteristics; categories of managerial control; identification of architectural principles for SoS
Applies to: SoSM

Source: (Maier, Architecting Principles for Systems-of-Systems, 1998)
Topic: Reduction to two principal characteristics of SoS
Applies to: SoSM

Source: (Dahmann & Baldwin, 2008)
Topic: Identifies types of system misclassification; identification of category of managerial control for SoS
Applies to: SoSM
Figure 2-5: Development Timeline for EOS Standards of Practice
2.3.2 Classical Sciences Assumptions Underpinning EOS
A discussion of the assumptions underlying the classical sciences is important in that the theory of systems engineering for ordered systems (EOS) is largely based on the theories of the classical sciences. Ryan wrote that Descartes “described a scientific method, the adherence to which he hoped could provide privileged access to truth” adding “the second rule of analytic reduction, and the third rule of understanding the simplest objects and phenomena first, provided the view of scientific explanation as decomposing the problem into simple parts to be considered individually, which could then be re-assembled to yield an understanding of the integrated whole.” (Ryan, 2008) Rebovich wrote:
Classical systems engineering is a sequential, iterative development process used to produce systems and sub-systems, many of which are of unprecedented technical complication and sophistication. The INCOSE Systems Engineering process is a widely recognized representation of classical systems engineering.
An implicit assumption of classical systems engineering is that all relevant factors are largely under the control of or can be well understood and accounted for by the engineering organization, the system engineer, or the program manager and this is normally reflected in the classical systems engineering mindset, culture, and processes. (Rebovich Jr., The Evolution of Systems Engineering, 2008)
With the adoption and application of theories from the classical sciences comes acceptance of the assumptions that underpin them, assumptions due in part to the simplifications required to solve complex mathematics by hand. Sheard wrote
“engineers and scientists generally considered nonlinear equations to be intractable” adding “scientists from Galileo to Newton had to assume that nonlinearities … were small enough to be neglected.” (Sheard S. , Complex Adaptive Systems in Systems
Engineering and Management, 2009, p. 1289) Regarding assumptions or conditions that are part of classical sciences, von Bertalanffy wrote:
The first is that interactions between "parts" be non-existent or weak enough to be neglected for certain research purposes. Only under this condition, can the parts be "worked out," actually, logically, and mathematically, and then be "put together."
The second condition is that the relations describing the behavior of parts be linear; only then is the condition of summativity given, i.e., an equation describing the behavior of the total is of the same form as the equations describing the behavior of the parts. (von Bertalanffy, General System Theory, 1972, p. 18)
Table 2-5 contains Sheard’s four key assumptions for ordered/traditional systems:
Table 2-5: Assumptions in Ordered (Traditional) Systems Engineering
Assumption: Newtonian mechanics
Implications: Linear analysis (possibly perturbed); avoid three-body and higher-order problems

Assumption: Equilibrium conditions
Implications: Thermodynamics applies: no energy transfer, highest entropy; Gaussian distributions (bell curves) predominate

Assumption: Decomposition
Implications: Break down system into subsystems and, further, allocate requirements, build, and integrate; pay special attention to interfaces; machine mental models: predict, control, and conform

Assumption: Hierarchical management
Implications: Distribution of complexity among workers, except at the top; focus on efficiency, predictability, control; waterfall or waterfall-derived development life cycles; specification-based acquisition; separation of management and technical

Assumption: Stable environment
Implications: No long-term changes; few transients, and those treated as perturbations; specifications assume known and stable environment

(Sheard S. , Complex Adaptive Systems in Systems Engineering and Management, 2009, p. 1287)
Each of the four major assumptions of classical systems is described in turn, presenting first the theoretical basis or justification for the simplification or assumption, followed by a discussion of its adoption and potential impacts on EOS.
2.3.2.1 Assumption of Newtonian Mechanics
Bertalanffy wrote “Classic science was concerned with one-way causality or relations between two variables as even the three-body problem permits no closed solution by
analytical methods of classical mechanics.” (von Bertalanffy, The History and Status of
General Systems Theory, 1972) Weinberg wrote:
Newton’s Law of Universal Gravitation states that the force of attraction between two bodies is in no way dependent on the presence of a third body. As it happens, the solar system has one body (the sun) whose mass is much larger than any other masses, larger, in fact, than the mass of all the other bodies together. Because of this dominant mass, the pair equations not involving the sun’s mass yield forces small enough to be ignored, at least considering the accuracy of the data Newton was trying to explain. (Weinberg, 1975, p. 9)
By ignoring the many objects in the cosmos and instead focusing on the very small number of objects close enough to matter, Newton was able to use classical science to predict the motion of nearby planets. Conversely, classical studies of gases, made up of molecules, required the opposite assumption: that the interesting measurements were a few average properties of the molecules, rather than the exact properties of any one molecule, which could not be measured. (Weinberg, 1975, p. 14)
The Law of Large Numbers, which applies to systems that are complex yet sufficiently random in their behavior to be regular enough to study statistically, states that the larger the population, the more likely one is to observe values that are close to the predicted average values. (Weinberg, 1975, p. 14) von Bertalanffy described the use of statistics based on randomness and the Law of Large Numbers as “unorganized complexity” adding “there loomed the problem of ‘organized complexity’ that is interrelations between many but not infinitely many components.” (von Bertalanffy, The History and Status of General Systems Theory, 1972)
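The Law of Large Numbers can be demonstrated with a short Monte Carlo sketch (an illustration, not drawn from Weinberg): draws from a uniform population have a predicted average of 0.5, and the observed sample mean lands closer to that value as the population grows.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

def sample_mean(n):
    """Mean of n draws from a uniform(0, 1) population (true mean 0.5)."""
    return sum(random.random() for _ in range(n)) / n

# The error of the observed mean shrinks (roughly as 1/sqrt(n)) as n grows,
# which is why "unorganized complexity" can be studied statistically.
for n in (10, 1_000, 100_000):
    print(n, abs(sample_mean(n) - 0.5))
```

Note the prerequisite von Bertalanffy identifies: this statistical regularity depends on the components being effectively independent, which is exactly what fails for the interrelated components of “organized complexity.”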
According to Norman and Kuras, “Traditional systems engineering (TSE) has its foundations in Linear System Theory. Key ideas are proportionality, superpositioning, the existence of invertible functions (i.e., x = f⁻¹(f(x))), and the assumption of repeatability.” They add that, “the practice of TSE is the application of a series of linear
transformations” where “a hallmark of the process is the ability to justify everything built in terms of the original requirements.” (Norman & Kuras, 2006) Sheard wrote that the assumption of Newtonian mechanics means, “we assume that the systems behave repeatably and predictably.” (Sheard S. , Complex Adaptive Systems in Systems
Engineering and Management, 2009, p. 1287)
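The linear-system assumptions Norman and Kuras name (proportionality, superposition, invertibility) can be spot-checked numerically. The sketch below is my own illustration; the sample points, test functions, and tolerance are arbitrary choices, not part of the cited work:

```python
def is_linear(f, xs, tol=1e-9):
    """Spot-check proportionality f(a*x) == a*f(x) and
    superposition f(x + y) == f(x) + f(y) at sample points."""
    return all(
        abs(f(a * x) - a * f(x)) < tol and abs(f(x + y) - (f(x) + f(y))) < tol
        for a in (2.0, -3.0) for x in xs for y in xs
    )

double = lambda x: 2.0 * x   # linear map
square = lambda x: x * x     # nonlinear: (x + y)**2 != x**2 + y**2 in general

xs = [0.5, 1.0, 4.0]
print(is_linear(double, xs), is_linear(square, xs))  # True False

# Invertibility: for the linear map an inverse exists with x == inv(double(x)),
# the x = f^-1(f(x)) property Norman and Kuras cite.
inv = lambda y: y / 2.0
assert all(abs(inv(double(x)) - x) < 1e-9 for x in xs)
```

A nonlinear system fails these checks, which is precisely why the “series of linear transformations” practice of TSE does not carry over to it.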
2.3.2.2 Assumption of Decomposition
Descartes described his philosophy based on four precepts. His second precept, analytic reduction, was to “divide each of the difficulties under examination into as many parts as possible, and as might be necessary for its adequate solution.” (Ryan, 2008)
INCOSE wrote that, “the best way to understand a complicated system is to break it down into parts recursively until the parts are so simple that we understand them” warning that “this approach does not help us to understand a complex system, because the emergent properties that we really care about disappear when we examine the parts in isolation.” (INCOSE SEH, 2015, p. 9) Sheard noted the limitation to this approach stating, “Decomposition necessarily reduces emphasis on the aspects of the system that cannot be broken into small pieces – how the system becomes a whole that is more than the sum of its parts.” (Sheard S. , Complex Adaptive Systems in Systems Engineering and Management, 2009, p. 1288)
DeRosa stated, “TSE is essentially a reductionist process, proceeding logically from requirements to delivery of a system.” (DeRosa, Forward, 2011, p. 2) Sage and Rouse characterized TSE as, “developing processes for systems engineering that allow us to decompose the engineering of a large system into smaller subsystem engineering issues, engineer the subsystems, and then build the complete system as an integrated collection
of these subsystems.” (Sage & Rouse, An Introduction to Systems Engineering and
Systems Management, 2009, p. 11) Sheard described decomposition as a methodology to reduce complexity of systems:
Systems engineering presumes, even dictates, that the complexity of a system solution can be reduced by decomposition. We first investigate the requirements for the system. Then we create a system architecture that will satisfy the requirements of the system: in doing so, we define subsystems and how they interact. Then we allocate the system-level requirements to various subsystems. We are then ready to repeat the process at the subsystem level. This process is called decomposition. (Sheard S. , Complex Adaptive Systems in Systems Engineering and Management, 2009, p. 1288)
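The recursive process Sheard describes can be sketched as a small allocation routine. Everything below is a hypothetical illustration: the `Element` structure, the `architect` rule that halves requirement sets, and the names are my own, not part of Sheard’s method. It does, however, show the hallmark Norman and Kuras mention, that every leaf can be justified in terms of the original requirements:

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    """A system, subsystem, or component with its allocated requirements."""
    name: str
    requirements: list
    children: list = field(default_factory=list)

def decompose(element, architect):
    """Recursively allocate requirements to subsystems: the architect maps an
    element to (subsystem name, requirements) allocations; recursion stops
    when the architect allocates nothing further."""
    for sub_name, sub_reqs in architect(element):
        element.children.append(decompose(Element(sub_name, sub_reqs), architect))
    return element

def architect(elem):
    """Hypothetical allocation rule: halve the requirement set until singletons."""
    if len(elem.requirements) <= 1:
        return []
    mid = len(elem.requirements) // 2
    return [(elem.name + ".A", elem.requirements[:mid]),
            (elem.name + ".B", elem.requirements[mid:])]

def leaves(e):
    """Lowest-level elements; each traces back to an original requirement."""
    return [e] if not e.children else [l for c in e.children for l in leaves(c)]

system = decompose(Element("SYS", ["R1", "R2", "R3", "R4"]), architect)
print([(l.name, l.requirements) for l in leaves(system)])
```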
Sage and Rouse note a limitation to this approach stating that, “Decomposition necessarily reduces emphasis on the aspects of the system that cannot be broken into small pieces – how the system becomes a whole that is more than the sum of its parts.”
(Sage & Rouse, An Introduction to Systems Engineering and Systems Management,
2009) Gilbert and Yearworth wrote:
Systems Engineering development projects often fail to meet delivery expectations in terms of timescales and cost. Project plans, which set cost and deadline expectations, are produced and monitored within a reductionist paradigm, incorporating a deterministic view of cause and effect. This assumes that the cumulative activities and their corresponding durations that comprise the developed solution can be known in advance, and that monitoring and management intervention can ensure satisfactory delivery of an adequate solution, through implementation of this plan. (Gilbert & Yearworth, 2016)
Senge wrote that, “from a very early age, we are taught to break apart problems, to fragment the world. This apparently makes complex tasks and subjects more manageable” but also warned that, “we pay a hidden, enormous price.” (Senge, 2000)
Senge describes limits of decomposition:
We can no longer see the consequences of our actions; we lose our intrinsic sense of connection to the larger whole. When we then try to “see the big picture,” we try to reassemble the fragments in our minds, to list and organize all the pieces. But, as physicist David Bohm says, the task is futile – similar to trying to reassemble the
fragments of a broken mirror to see a true reflection. Thus, after a while we give up trying to see the whole together. (Senge, 2000)
2.3.2.3 Assumption of Hierarchical Management
Descartes described his philosophy based on four precepts. His third precept, understanding the simplest objects and phenomena first, was to “conduct my thoughts in such order that, by commencing with objects the simplest and easiest to know, I might ascend by little and little, and, as it were, step by step, to the knowledge of the more complex; assigning in thought a certain order even to those objects which in their own nature do not stand in a relation of antecedence and sequence.” (Ryan, 2008)
In the Handbook of Systems Engineering and Management, Shenhar and Sauser wrote, “a simple way to define various levels of complexity is to use a hierarchical framework of systems and subsystems.” (Shenhar & Sauser, 2009, p. 126) Simon wrote:
By a hierarchic system, or hierarchy, I mean a system that is composed of interrelated subsystems, each of the latter being, in turn, hierarchic in structure until we reach some lowest level of elementary subsystem. (Simon, 1962)
2.3.2.4 Assumption of Stable Environment
Karl Deutsch described the assumptions of classical sciences and mechanism writing:
The whole was completely equal to the sum of its parts; which could be run in reverse; which would behave in exactly identical fashion no matter how often these parts were disassembled and put together again;
Irrespective of the sequence in which the disassembling or reassembling would take place, the parts were never significantly modified by each other, nor by their own past; and, each part once placed in its appropriate position with its appropriate momentum, would stay exactly there and continue to fulfil its completely and uniquely determined function. (Weinberg, 1975, p. 4)
Hybertson and Sheard wrote, “the traditional view of system characteristics is that they are relatively stable and predictable.” (Hybertson & Sheard, 2008) According to
Norman and Kuras, TSE begins with the specification of requirements and continues with
allocating desired, known functionality among specific elements of a design; all known a priori and stable over time subject to the ability to justify everything built in terms of the original requirements. (Norman & Kuras, 2006) They conclude that, “the practice of TSE seeks to understand the place of an element within the environment, isolate the element under study from the environment, and then treat the environment as a constant.”
(Norman & Kuras, 2006)
2.3.3 Codifying TSM
Norman and Kuras wrote, “traditional system engineering relies on the making of and the fulfilling of predictions” adding that the characteristics a system must meet to be considered a traditional system and thus a candidate for the application of traditional system engineering (TSE) include:
The specific desired outcome must be known a priori, and it must be clear and unambiguous (implied in this is that the edges of the system, and thus responsibility, are clear and known);
There must be a single, common manager who is able to make decisions about allocating available resources to ensure completion;
Change is introduced and managed centrally; and,
There must be “fungible” resources (that is money, people, time, etc.) which can be applied and reallocated as needed. (Norman & Kuras, 2006)
Thus, in its most basic form, the definition of a traditional system and its complexity is related to the recursive decomposition of a larger entity into smaller, more manageable entities where the system engineer can predict the behavior of the system a priori.
As quoted in Section 2.3.2.2, DeRosa characterized TSE as “essentially a reductionist process,” and Sage and Rouse characterized it as decomposing the engineering of a large system into subsystem issues and rebuilding the complete system as an integrated collection of those subsystems. (DeRosa, Forward, 2011, p. 2; Sage & Rouse, An Introduction to Systems Engineering and Systems Management, 2009, p. 11) INCOSE defined a system as:
A system is a construct or collection of different elements that together produce results not obtainable by the elements alone. The elements, or parts, can include people, hardware, software, facilities, policies, and documents; that is, all things required to produce systems-level results. The results include system level qualities, properties, characteristics, functions, behavior, and performance. The value added by the system as a whole, beyond that contributed independently by the parts, is primarily created by the relationship among the parts; that is, how they are interconnected (INCOSE, 2016)
INCOSE wrote, “While the concepts of a system can generally be traced back to early Western philosophy and later to science, the concept most familiar to systems engineering is often traced to Ludwig von Bertalanffy, in which a system is regarded as a ‘whole’ consisting of interacting ‘parts,’” adding that systems “are man-made, created and utilized to provide products or services in defined environment for the benefit of users and other stakeholders.” (INCOSE SEH, 2015, p. 5) In ISO/IEC/IEEE 15288, Systems and software engineering – system life cycle processes, IEEE defined a system as:
The systems considered in this International Standard are man-made, created and utilized to provide products or services in defined environments for the benefit of users and other stakeholders. These systems may be configured with one or more of the following system elements: hardware, software, data, humans, processes (e.g., processes for providing service to users), procedures (e.g., operator instructions), facilities, materials, and naturally occurring entities. As viewed by the user, they are thought of as products or services. (ISO/IEC/IEEE 15288:2015(E), 2015, p. 11)
The INCOSE SEH augments the IEEE definition by introducing a System of Interest
(SOI) construct which was recursively defined as a set of elements that are not systems, a set of elements that can be systems, a System of Systems (SoS), or an enterprise as follows:
an SOI is a system that is “man-made, created, and utilized to provide products or services in defined environment for the benefit of users and other stakeholders,” comprised of “an integrated set of elements, subsystems, or assemblies that accomplish a defined objective”;
an SOI is a system comprised of system elements that “can be systems on their own merit”;
an SOI is a system of systems (SoS) “whose elements are managerially and/or operationally independent systems”; or,
an SOI is an enterprise, which is a “purposeful combination (e.g., a network) of interdependent resources (e.g., people, processes, organization, supporting technologies, and funding) that interact with each other to coordinate functions, share information, allocate funding, create workflows, and make decisions and their environment(s) to achieve business and operational goals through a complex web of interactions distributed across geography and time.” (INCOSE SEH, 2015, pp. 5, 7, 8, 176)
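The recursive SOI construct can be mirrored as a recursive data type. The sketch below is illustrative only: the class names and example are mine (not INCOSE-normative), and the enterprise case is omitted for brevity:

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Component:
    """An element that is not itself a system."""
    name: str

@dataclass
class System:
    """A set of interacting elements, each possibly a system in its own right."""
    name: str
    elements: List["Element"]

@dataclass
class SystemOfSystems(System):
    """Elements are managerially and/or operationally independent systems."""

Element = Union[Component, System]

def depth(e) -> int:
    """Levels of recursion below an element (0 for a bare component)."""
    return 0 if isinstance(e, Component) else 1 + max(depth(x) for x in e.elements)

# A hypothetical two-level SOI: a system whose elements include a subsystem.
soi = System("avionics", [
    System("nav", [Component("gps"), Component("imu")]),
    Component("display"),
])
print(depth(soi))
```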
2.3.4 Codifying SoSM
Sage and Cuppan challenged TSE in 2001 to “reconsider our canonical approach to engineering and management of SoS” (Sage & Cuppan, 2001), which evolved into systems of systems engineering (SoSE) with the objective of addressing “shortcoming in the ability to deal with difficulties generated by increasingly complex and interrelated systems of systems.” (Gorod, Sauser, & Boardman, System-of-Systems Engineering Management: A Review of Modern History and a Path Forward, 2008)
Maier wrote, “While the phrase “system-of-systems” is commonly seen, there is less agreement on what they are, how they may be distinguished from ‘conventional’ systems, or how their development differs from other systems.” (Maier, Architecting Principles for
Systems-of-Systems, 1998)
Maier presented the concept that a system of systems (SoS) exhibits five characteristics: operational independence, managerial independence, geographic distribution, emergent behavior, and evolutionary development. (Maier, Architecting Principles for
Systems-of-Systems, 1996; Maier, Architecting Principles for Systems-of-Systems, 1998)
Maier wrote that a system that does not meet the operational and managerial independence of its components is not to be considered a system-of-systems,
“regardless of the complexity or geographic distribution of its components.” (Maier,
Architecting Principles for Systems-of-Systems, 1998)
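Maier’s rule reads naturally as a necessary-condition test. The sketch below is my own encoding; the trait labels are shorthand of mine, not Maier’s wording:

```python
# Maier's five SoS characteristics (labels are illustrative shorthand)
MAIER_CHARACTERISTICS = {
    "operational_independence", "managerial_independence",
    "geographic_distribution", "emergent_behavior", "evolutionary_development",
}

def is_sos(traits):
    """Per Maier, operational AND managerial independence are necessary:
    without both, a system is not an SoS regardless of its other traits."""
    return {"operational_independence", "managerial_independence"} <= set(traits)

print(is_sos({"operational_independence", "managerial_independence",
              "emergent_behavior"}))                              # meets the test
print(is_sos({"geographic_distribution", "emergent_behavior"}))   # does not
```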
Sage and Biemer defined an SoS as, “a large-scale, complex system, involving a combination of technologies, humans, and organizations, and consisting of components which are systems themselves, achieving a unique end-state by providing synergistic capability from its component systems, and exhibiting a majority of the following characteristics: operational and managerial independence, geographic distribution, emergent behavior, evolutionary development, self-organization, and adaptation.” (Sage
& Biemer, 2007)
INCOSE wrote that a SoS is a system, “whose elements are managerially and/or operationally independent systems” and a, “SoS usually exhibits complex behaviors” that may be attributed to the existence of the five characteristics adding that for complex systems, “interactions between the parts exhibit self-organization, where local interactions give rise to novel, nonlocal, emergent patterns.” (INCOSE SEH, 2015, p. 9)
Recognizing that the body of SE knowledge continues to evolve, Version 4 of the
INCOSE SE Handbook introduces system of systems using Maier’s five characteristics of operational independence, managerial independence, geographic distribution, emergent behavior, and evolutionary development. (INCOSE SEH, 2015, p. 8)
In describing the difference between TSE and SoSE, Keating, Padilla, and Adams argue that it is a mistake to simply assume that SoSE is an extrapolation of TSE or that
SoSE is so separate and distinct from TSE that once SoSE applies there is no value in
TSE. (Keating, Padilla, & Adams, 2008)
INCOSE states that, “the independence of constituent systems in an SoS is the source of a number of technical issues facing SE of SoS” adding, “the fact that a constituent system may continue to change independently of the SoS, along with the interdependencies between that constituent system and other constituent systems, adds to the complexity of the SoS.” (INCOSE SEH, 2015, p. 10)
Keating et al., state that, “Engineers of future complex systems face an emerging challenge of how to address problems associated with integration of multiple complex systems” with the realization that "Complex systems that have been conceived, developed, and deployed as stand-alone systems to address a singular problem can no longer be viewed as operating in isolation.” (Keating, et al., 2003) Keating et al., continued “SoSE represents an evolution of TSE, not a radical departure” adding “TSE approaches should be incorporated in SoSE efforts as appropriate”; however, “SoSE must be developed to address the shortcoming of TSE in addressing increasingly complex systems problems.” (Keating, et al., 2003)
INCOSE states that, “one of the objectives of the SE process is to minimize undesirable consequences” which “can be accomplished through the inclusion of and contributions from experts across relevant disciplines.” (INCOSE SEH, 2015, p. 12) This
‘call for experts’ aligns with the Cynefin definition of the ‘Complicated’ region where cause and effect requires special investigation or inclusion of expert knowledge. In systems where the relationship between cause and effect requires analysis, special study or the application of expert knowledge, SoSE is the preferred SE domain.
According to Keating et al., limitations of TSE include the inability to address high levels of ambiguity and uncertainty, the practice of placing system context in the background, and attempting to deliver complete solutions when system demands may be better met by partial system solutions deployed in an iterative fashion. (Keating, et al.,
2003) NASA’s SE handbook describes the relationship between complexity and requirements, “As system complexity grows, the potential for conflicts between requirements increases.” (NASA, 2009, p. 246)
Keating et al., described the limitations with TSE:
Traditional systems engineering has not been developed to address high levels of ambiguity and uncertainty in complex systems problems;
Although traditional systems engineering does not completely ignore contextual influences on system problem formulation, analysis, and resolution, it places context in the background - Context, the circumstances and conditions within which a complex systems problem is embedded, can constrain and overshadow technical analysis in determining system solution success; and,
Demand for deploying complex systems that may offer incomplete solutions. This is contrary to traditional thinking rooted in a linear pattern of concept. (Keating, et al., 2003)
This research now branches from EOS, based on the adoption of core classical-science precepts, to Engineering Un-Ordered Systems (EUOS), which is broadly based on theories from the systems sciences and complexity theory.
2.4 Engineering Un-Ordered Systems (EUOS)
In the Handbook of Systems Engineering, Sheard wrote that systems may be grouped into two primary groupings: an ordered/traditional system or a complex/non-traditional system. (Sheard S. , Complex Adaptive Systems in Systems Engineering and
Management, 2009, p. 1284) The distinction of ‘un-ordered’ versus ‘ordered’ systems was derived from Kurtz and Snowden who wrote:
To avoid much repetition of the longer terms “directed order” and “emergent order,” we call emergent order “un-order.” Un-order is not the lack of order, but a different kind of order, one not often considered but just as legitimate in its own way. Here we deliberately use the prefix “un-” not in its standard sense as “opposite of” but in the less common sense of conveying a paradox, connoting two things that are different but in another sense the same. Thus, by our use of the term “un-order,” we challenge the assumption that any order not directed or designed is invalid or unimportant.
In the ordered domain we focus on efficiency because the nature of systems is such that they are amenable to reductionist approaches to problem solving; the whole is the sum of the parts, and we achieve optimization of the system by optimization of the parts. In the domain of un-order {discussed next} the whole is never the sum of the parts; every intervention is also a diagnostic, and every diagnostic an intervention; any act changes the nature of the system. As a result, we have to allow a degree of sub-optimal behavior of each of the components if the whole is to be optimized. (Kurtz & Snowden, 2003)
The definition of “system” and its subsequent use in this research derive from the writings of Ludwig von Bertalanffy, specifically his description of the issues associated with defining and describing a “system” and the need for a systems approach:
Compared to the analytical procedure of classical science with resolution into component elements and one-way or linear causality as basic category, the investigation of organized wholes of many variables requires new categories of interaction, transaction, organization, teleology, etc., with many problems arising for epistemology, mathematical models, and techniques. (von Bertalanffy, The History and Status of General Systems Theory, 1972)
White wrote “complex systems engineering deals with challenging system environments where SE’s simplifying assumptions do not hold.” (White B. E., 2010)
Complexity is not absence of order – rather it is a different form of order, of un-order, or
emergent order. While ordered systems are designed, and order is constructed top-down, un-ordered systems are characterized by unplanned order emerging from agents and subsystems to the system as a whole. (Dahlberg, 2015)
Review of the literature identified exemplars where a perceived need for new or enhanced (i.e., non-traditional) SE methodology was established by comparison or contrast to classical, traditional systems engineering methods (TSM), generally documenting an assumption, limit, or constraint that prohibited TSM application in that specific example.
Norman and Kuras, describing a specific MITRE program, provide this example:
Using the current instantiation of the Air and Space Operations Center (AOC2), and the desired evolution of it, the AOC2 is shown to be best thought of as a complex system. Complex Systems are alive and constantly changing. They respond and interact with their environments – each causing impact on (and inspiring change in) the other. We make the case that a traditional systems engineering (TSE) approach does not scale to the AOC2; consequently, we don’t believe TSE scales to the “enterprise.” (Norman & Kuras, 2006, p. 1)
2.4.1 EUOS Standards of Practice
Table 2-6: EUOS Standards of Practice

| Source | Topic | TSM | SoSM | ESM | CSM |
|---|---|---|---|---|---|
| (Oliver, Kelliher, & Keegan, Jr., 1997) Engineering Complex Systems | Describes combining text descriptions and modeling to analyze and describe large or small complex systems | | | | X |
| (Bar-Yam, 2003) Dynamics of Complex Systems | Mathematical treatment of structure, dynamics, evolution, development, and quantitative complexity that apply to all complex systems | | | | X |
| (DeRosa, Grisogono, Ryan, & Norman, 2008) A Research Agenda for the Engineering of Complex Systems | Enterprise systems; classes of problems for which complex systems are required | | | X | X |
| (Sheard & Mostashari, Principles of Complex Systems for Systems Engineering, 2009) | Systems of systems compared to complex systems with examples | | X | | X |
| (Sage & Rouse, Handbook of Systems Engineering and Management, 2009) | Extensive discussion of all facets of engineering of systems | X | X | X | X |
| (Rebovich Jr & White, 2011) Enterprise Systems Engineering | Extensive discussion of all facets of engineering enterprise systems; presentation of case studies | | | X | |
| (Gorod, White, Ireland, Gandhi, & Sauser, Case Studies in System of Systems, Enterprise Systems, and Complex Systems Engineering, 2015) | Extensive history of SoSM, ESM, and CSM; presentation of case studies | | X | X | X |
| (Gilbert & Yearworth, 2016) Complexity in a Systems Engineering Organization: An Empirical Case Study | How does complexity in the organization affect the ability of systems engineers to meet delivery expectations in terms of cost and time? | | | | X |
2.4.2 Codifying ESM
According to Sitton and Reich, “although there are technical papers describing such complex adaptive systems as well as some early papers contributing to the theory of systems engineering of enterprises, there is no generally accepted theory or set of best practices on this topic.” (Sitton & Reich, 2015) Rebovich and White provide the most comprehensive treatment of engineering the enterprise in “Enterprise Systems Engineering: Advances in the Theory and Practice.” (Rebovich Jr & White, 2011)
INCOSE also introduces enterprise system engineering (ESE) in Para 8.5, as “Enterprise
SE is the application of SE principles, concepts, and methods to the planning, design, improvement, and operations of an enterprise” adding “enterprise SE is an emerging
discipline that focuses on frameworks, tools, and problem-solving approaches for dealing with the inherent complexities of the enterprise.” (INCOSE SEH, 2015, p. 176)
DeRosa described key elements of ESE as: development through adaption, strategic technical planning, enterprise governance, and ESE processes. (DeRosa, Introduction,
2011, p. 8) While a progeny of both TSE and SoSE, ESE is applicable when there are concurrent, substantial technical and mission changes.
DeRosa wrote “Recognizing the alignment between information-intensive networks in man-made enterprise and complex ecosystems in the natural world, the theoretical basis of ESE is a combination of complex systems where networks were generally open systems with porous boundaries and dynamic and unpredictable natural systems.”
(DeRosa, Forward, 2011, p. 10) DeRosa, Rebovich, and Swarz wrote “Ackoff has characterized an enterprise as a “purposeful system” composed of agents who choose both their goals and the means for accomplishing those goals” adding “ESE must account for the concerns, interests, and objectives of these agents.” (DeRosa, Rebovich Jr., &
Swarz, An Enterprise System Engineering Model, 2006) Kurtz and Snowden identified three important contextual characteristics that make it difficult to simulate human activity using agent-based computer models:
Humans are not limited to one identity – In a human complex system, an agent is anything that has identity, and we constantly flex our identities both individually and collectively. Individually we can be a parent, sibling, spouse, or child and will behave differently depending on the context. Accordingly, it is not always possible to know which unit of analysis we are working with.
Humans are not limited to acting in accordance with predetermined rules – We are able to impose structure on our interactions (or disrupt it) as a result of collective agreement or individual acts of free will. We are capable of shifting a system from complexity to order and maintaining it there in such a way that it becomes predictable. As a result, questions of intentionality play a large role in human patterns of complexity. It is difficult to simulate true free will and complex intentionality within a rule-based simulation.
Humans are not limited to acting on local patterns – People have a high capacity for awareness of large-scale patterns because of their ability to communicate abstract concepts through language, and more recently, because of the social and technological infrastructure that enables them to respond immediately to events half a world away. This means that to simulate human interaction, all scales of awareness must be considered simultaneously rather than choosing one circle of influence for each agent. (Kurtz & Snowden, 2003)
Elgass et al., wrote that “enterprises are not generally created from a rigorously planned framework to immediately meet a newly identified need; rather, they evolve.” (Elgass, et al., 2011) In contrast to TSE or SoSE, ESE is “exploratory and experimental rather than preplanned and execution-oriented.” (DeRosa, Introduction, 2011, p. 8)
MITRE wrote “When a system is bounded with relatively static, well-understood requirements, the classical methods of systems engineering are applicable and powerful” adding “At the other end of the spectrum, when systems are networked and each is individually reacting to technology and mission changes, the environment for any given system becomes essentially unpredictable.” (MITRE, 2014, p. 37) Rebovich wrote that,
“when networked systems are individually adapting to both technology and mission changes, then the environment for any given system or individual becomes essentially unpredictable” adding “the combination of large-scale interdependencies and unpredictability creates an environment that is fundamentally different from that of the system or system of systems (SoS).” (Rebovich Jr., Systems Thinking for the Enterprise,
2011, p. 33) The distinction is that SoSE focuses on technology changes while ESE incorporates environments with rapid mission changes as well.
Rebovich concluded that, as a result, “systems engineering success expands to encompass not only success of an individual system or SoS, but also the network of constantly changing systems.” (Rebovich Jr., Systems Thinking for the Enterprise, 2011, p. 33) Sitton and Reich wrote that the “enterprise operational requirements definition process is much more complex because it usually involves understanding the needs and operational tasks of various users that operate in different domains, uses different terms and expressions and specialize in different worlds of content.” (Sitton & Reich, 2015)
2.4.3 Codifying CSM
Sheard and Mostashari defined complex systems as, “systems that do not have a centralizing authority and are not designed from a known specification, but instead involve disparate stakeholders creating systems that are functional for other purposes and are only brought together in the complex system because the individual “agents” of the system see such cooperation as being beneficial for them.” (Sheard & Mostashari,
Principles of Complex Systems for Systems Engineering, 2009) Sheard and Mostashari presented the following system characteristics for assessing if a particular system was complex:
Autonomous interacting parts (agents)
o Fuzzy boundaries
Self-organization
o Energy, in and out
Display emergent macro-level behaviour
o Nonlinearity
o Non-hierarchy and central authority
o Various scales
Adapt to surrounding (environment)
o Become more complex with time; increasingly specialized
Elements change in response to pressures from neighboring elements (Sheard &
Mostashari, Principles of Complex Systems for Systems Engineering, 2009)
Holland wrote “A complex adaptive system has no single governing equation, or rule, that controls the system” adding “Complex adaptive systems also exhibit an aggregate behavior that is not simply derived from the actions of the parts.” (Holland, 1992) In this research, CAS is associated with EUOS and may include ESM and/or CSM.
2.5 Cynefin Sense-Making Framework
This section describes sense-making, provides insight into the development of the Cynefin framework, and describes the domains of complexity contained within the framework. Van Beurden, Kia, Zask, Dietrich and Rose wrote “the Cynefin Framework, especially when used as a sense-making tool, can help practitioners understand the complexity of issues, identify appropriate strategies, and avoid the pitfalls of applying reductionist approaches to complex situations.” (Van Beurden, Kia, Zask, Dietrich, & Rose, 2011)
2.5.1 Introduction to Sense-Making
Kurtz and Snowden wrote that “humans use patterns to order the world and make sense of things in complex situations” adding that “patterns are something we actively, not passively create.” (Kurtz & Snowden, 2003) Checkland wrote “given the complexity of any situation in human affairs, there will be a huge number of human activity system models which could be built” adding “the first choice to be made is of which ones are likely to be most relevant.” (Checkland, 2000) While sense-making has been described as
“placing stimuli into some kind of framework,” generally referring to a “frame of reference,” Louis proposed that sense-making be viewed as a “thinking process that uses retrospective accounts to explain surprises.” (Louis, 1980)
2.5.2 History of the Cynefin Framework
Two years before the Cynefin framework was introduced, Snowden published an article titled “Multi-ontology sense making: a new simplicity in decision making,” in which he described a “landscape of management” bringing “new simplicity into acts of decision making and intervention design in organizations.” (Snowden D. J., Multi-ontology sense making: a new simplicity in decision making, 2005) Snowden described multi-ontology sense making by contrasting the nature of systems (ontology) with the nature of the way we know things (epistemology) as shown in Figure 2-6.
Figure 2-6: Snowden's Landscape of Management Provides Insight into his Initial Research Intertwining Ontology and Epistemology (Snowden D. J., Multi-ontology sense making: a new simplicity in decision making, 2005)
Research continued into what became the Cynefin framework in areas of knowledge management, cultural change and community dynamics by Kurtz and Snowden who conducted a program of disruptive action research using methods of narrative and
complexity theory to address critical business issues. They “began by questioning the basic assumptions that pervade the practice and to a lesser degree the theory of decision making and policy formulation in organizations” including assumptions of order, rational choice, and intentional capability to research what happens in decision theory when those assumptions are relaxed. (Kurtz & Snowden, 2003)
Kurtz and Snowden incorporated complexity science into their research of management science focusing on the concepts of ‘emergent order’ and ‘awareness of emergent order’. They wrote that “a considerable amount of research and some early practice is taking place using complex system principles, mainly using computing power to simulate natural phenomena through agent-based models” adding “we believe that such tools are valuable in certain contexts, but are of more limited applicability when it comes to managing people and knowledge.” (Kurtz & Snowden, 2003)
Figure 2-7 shows the 2003 version of the Cynefin framework with the fifth domain, disorder, included in the center but not labeled. Nilsen wrote “a framework usually denotes a structure, overview, outline, system or plan consisting of various descriptive categories, e.g. concepts, constructs or variables, and the relations between them that are presumed to account for a phenomenon” adding “frameworks do not provide explanations; they only describe empirical phenomena by fitting them in a set of categories.” (Nilsen, 2015) Kurtz and Snowden wrote that potential users of the framework should “consider Cynefin a sense-making framework, which means that its value is not so much in logical arguments or empirical verifications as in its effect on the sense-making and decision-making capabilities of those who use it.” (Kurtz & Snowden, 2003)
Figure 2-7: 2003 Version of the Cynefin Framework by Kurtz and Snowden (Kurtz & Snowden, 2003)
Leveraging the research of Kurtz and Snowden applying sense-making to change and organizational learning, Snowden and Boone applied a complexity science perspective to help business leaders determine the “prevailing operative context so they can make appropriate choices” where “each domain requires different actions.” (Snowden &
Boone, A Leader's Framework for Decision Making, 2007) Snowden wrote about developing “a whole new concept of research in services, modelled on medical science, in which concepts and practice co-evolved.” (Snowden D. , 2011) The 2007 version of the Cynefin framework describes a typology of operational environments, called Class of
System Problem (COSP) in this research, of increasing complexity structured to help leaders determine the prevailing operative context from the wide variety of situations in which they must lead and make decisions. Shown in Figure 2-8, the 2007 version of the
Cynefin framework helps leaders and decision makers sense which context they are in to enable better decisions and “avoid the problems that arise when their preferred management
style causes them to make mistakes.” (Snowden & Boone, A Leader's Framework for
Decision Making, 2007) Snowden and Boone wrote:
The {Cynefin} framework sorts the issues facing leaders into five contexts defined by the nature of the relationship between cause and effect. Four of these – simple, complicated, complex, and chaotic – require leaders to diagnose situations and to act in contextually appropriate ways. The fifth – disorder – applies when it is unclear which of the other four contexts is predominant. Using the Cynefin framework can help executives sense which context they are in so that they can not only make better decisions but also avoid the problems that arise when their preferred management style causes them to make mistakes. (Snowden & Boone, A Leader's Framework for Decision Making, 2007)
Snowden and Boone described the Cynefin framework as a “leader’s framework for decision making” specifically to “allow executives to see things from new viewpoints, assimilate complex concepts, and address real-world problems and opportunities” in a
2007 article published in the Harvard Business Review. (Snowden & Boone, A Leader's
Framework for Decision Making, 2007) The Snowden and Boone version of the framework, shown in Figure 2-8, contains the five complexity regions and reinstates the order/un-order ontology first introduced in Snowden’s landscape of management (see
Figure 2-6, above).
Figure 2-8: 2007 Version of the Cynefin Framework by Snowden and Boone (Snowden & Boone, A Leader's Framework for Decision Making, 2007)
Close inspection of Figure 2-7 (the 2003 version of the Cynefin framework) and
Figure 2-8 (the 2007 version) shows that the nomenclature of the complexity domains was modified between the two versions.
Kurtz and Snowden wrote that the framework originated in knowledge management as a means of distinguishing between formal and informal communities interacting with both structured processes and uncertain conditions. (Kurtz & Snowden, 2003) In naming the framework Cynefin, Kurtz and Snowden wrote:
The name Cynefin is a Welsh word whose literal translation into English as habitat or place fails to do it justice. It is more properly understood as the place of our multiple affiliations, the sense that we all, individually and collectively, have many roots, cultural, religious, geographical, tribal, and so forth. We can never be fully aware of the nature of those affiliations, but they profoundly influence what we are. The name seeks to remind us that all human interactions are strongly influenced and frequently determined by the patterns of our multiple experiences, both through the direct influence of personal experience and through collective experience expressed as stories. (Kurtz & Snowden, 2003)
2.5.3 Cynefin Complexity Domains
Kurtz and Snowden described the Cynefin framework as a “phenomenological framework, meaning that what we care most about is how people perceive and make sense of situations in order to make decisions” adding “perception and sense-making are fundamentally different in order versus un-order.” (Kurtz & Snowden, 2003)
Kurtz and Snowden wrote “the central domain of disorder”, described in Section
2.5.3.5 below, “is critical to understanding conflict among decision makers looking at the same situation from different points of view.” (Kurtz & Snowden, 2003) Regarding the significant distinction between order and un-order they wrote:
Un-order is not the lack of order, but a different kind of order, one not often considered but just as legitimate in its own way. Here we deliberately use the prefix “un-” not in its standard sense as “opposite of” but in the less common sense of conveying a paradox, connoting two things that are different but in another sense the
same. Thus, by our use of the term “un-order,” we challenge the assumption that any order not directed or designed is invalid or unimportant. (Kurtz & Snowden, 2003)
Shown in Figure 2-9, Kurtz and Snowden wrote that in addition to the central domain of disorder, “the framework actually has two larger domains, each with two smaller domains inside.” (Kurtz & Snowden, 2003) This research asserts that within this larger
Ordered domain, shown on the right half of Figure 2-9, the four foundational assumptions underpinning systems engineering – Newtonian mechanics, analysis by decomposition, hierarchical management, and stable environment – apply, thus allowing for determinism, or the ability to relate cause and effect.
Figure 2-9: Domains of Un-Ordered, Ordered and Disorder
Shown in Figure 2-10, Kurtz and Snowden continued “in the right-side domain of order, the most important boundary for sense-making is that between what we can use immediately (what is known) and what we need to spend time and energy finding out about (what is knowable).” (Kurtz & Snowden, 2003) Brougham wrote “in an ordered system … behavior is highly predictable and the causality is either obvious from experience or can be determined with the right expertise.” (Brougham, 2015)
Figure 2-10: Ordered domains include the Simple domain and the Complicated domain
Shown in Figure 2-11, Kurtz and Snowden wrote “in the left-side domain of un-order, distinctions of knowability are less important than distinctions of interaction; that is, distinctions between what we can pattern (what is complex) and what we need to stabilize in order for patterns to emerge (what is chaotic).” (Kurtz & Snowden, 2003)
Figure 2-11: Un-Ordered domains include the Complex domain and the Chaotic domain
In summary, Kurtz and Snowden wrote “the Cynefin framework is based on three ontological states (order, complexity, and chaos) and a variety of epistemological options in all three of those states” adding “interweaving of ontology and epistemology appears to be an essential aspect of human sense-making in practice.” (Kurtz & Snowden, 2003)
Each of the five domains is described below.
2.5.3.1 Simple Domain – Ordered & Known
Snowden and Boone wrote that this domain is characterized by “stability and clear cause-and-effect relationships that are easily discernible by everyone.” Originally calling this “the domain of best practice” they wrote that “the right answer is self-evident and undisputed” and “decisions are unquestioned because all parties share an understanding.”
They wrote that this context “assumes an ordered universe, where cause-and-effect relationships are perceptible, and right answers can be determined based on the facts.”
(Snowden & Boone, A Leader's Framework for Decision Making, 2007) Kurtz and
Snowden wrote that “cause and effect relationships are generally linear, empirical in nature, and not open to dispute” adding “repeatability allows for predictive models to be created, and the objectivity is such that any reasonable person would accept the constraints of best practice.” (Kurtz & Snowden, 2003) Gardner wrote “this is the domain of established fact, we believe we fully comprehend the relationship between pieces of information in this space and are confident we can make predictions based on our understanding.” (Gardner, 2013) Sheard wrote that known situations “have repeatable and predictable relationships between cause and effect” and “lend themselves to imposition of best practices and standard operating procedures.” (Sheard S. , Complex
Adaptive Systems in Systems Engineering and Management, 2009, p. 1314)
2.5.3.2 Complicated Domain – Ordered & Knowable
Snowden and Boone characterized this as “the domain of experts” adding “though there is a clear relationship between cause and effect, not everyone can see it.” They wrote that leadership in this context “calls for investigating several options” and often requires “expert diagnosis” from “a team of experts” to investigate several viable options where there is more than one right answer. (Snowden & Boone, A Leader's Framework for Decision
Making, 2007) Kurtz and Snowden wrote that in the absence of time or money to fully understand an environment, this domain requires decision makers to rely on expert opinion. (Kurtz & Snowden, 2003) Like the simple context, the complicated context
“assumes an ordered universe, where cause-and-effect relationships are perceptible, and right answers can be determined based on the facts.” (Snowden & Boone, A Leader's
Framework for Decision Making, 2007) Gardner wrote “in this space we see the formation of formal collaboration between multi-discipline teams consisting of subject matter experts.” (Gardner, 2013)
Validation of the assertion shown in Figure 2-10 was provided by several sources.
Sheard wrote that knowable situations “experience cause and effect that are separated over time and space, and thus should be analysed, usually by reductionist methods.”
(Sheard S. , Complex Adaptive Systems in Systems Engineering and Management, 2009, p. 1314) Van Beurden et al., wrote “in this domain, structured techniques based on reductionist science (e.g. longitudinal studies), are used to produce evidence.” (Van
Beurden, Kia, Zask, Dietrich, & Rose, 2011) Kurtz and Snowden wrote in the knowable domain “we focus on efficiency because the nature of systems is such that they are amenable to reductionist approaches to problem solving; the whole is the sum of the parts, and we achieve optimization of the system by optimization of the parts.” (Kurtz &
Snowden, 2003)
2.5.3.3 Complex Domain – Un-Ordered & Knowable
Comparing complex systems to complicated systems, White et al., wrote:
Complex is more than complicated, a notion that is on the lowest rung of a discrete or even continuous scale of increasing complexity. Many people including engineers
and systems engineers use complex and complicated interchangeably, or worse, use complex when they mean only complicated. Complex refers to a range of difficulty that is more, and often much more, than merely complicated. (White, Gandhi, Gorod, Ireland, & Sauser, 2013)
Kurtz and Snowden wrote that “humans use patterns to order the world and make sense of things in complex situations.” (Kurtz & Snowden, 2003) Gardner wrote that the complex domain “is the arena of ambiguity” describing the need for “looking for patterns and/or insights.” (Gardner, 2013) Van Beurden et al., wrote that in the un-ordered domain
“there are cause/effect relationships but their non-linear nature and the multiplicity of agents defy conventional analysis” adding “unpredictable patterns emerge from the mix to be understood only in retrospect.” (Van Beurden, Kia, Zask, Dietrich, & Rose, 2011)
Snowden and Boone amplified the limits of observation writing “in this domain we can understand why things happen only in retrospect.” (Snowden & Boone, A Leader's
Framework for Decision Making, 2007) Kurtz and Snowden wrote:
This is the domain of complexity theory, which studies how patterns emerge through the interaction of many agents. There are cause and effect relationships between the agents, but both the number of agents and the number of relationships defy categorization or analytic techniques. Emergent patterns can be perceived but not predicted; we call this phenomenon retrospective coherence. (Kurtz & Snowden, 2003)
There is a risk of assuming that a pattern, once recognized, may be used for prediction of future outcomes. Van Beurden et al., warn:
Attempts to turn emergent patterns into policy or procedure by top-down ‘installation’ that disregards their context will inevitably be confronted by new emergent patterns, each of which will also be understood only on reflection. So even expert opinion, based on historically stable patterns of meaning, will not sufficiently prepare us to recognize and act on new unexpected patterns. (Van Beurden, Kia, Zask, Dietrich, & Rose, 2011)
2.5.3.4 Chaotic Domain – Un-Ordered & Unknowable
Kurtz and Snowden wrote that in the simple, complicated, and complex domains
“there are visible relationships between cause and effect” adding that “in the chaotic domain there are no such perceivable relations and the system is turbulent” warning “we do not have the response time to investigate change.” (Kurtz & Snowden, 2003) Van
Beurden et al., wrote that the chaotic domain is “the turbulent, un-ordered” and “has no visible cause/effect relationships.” (Van Beurden, Kia, Zask, Dietrich, & Rose, 2011)
Gardner wrote that the chaotic domain is a “turbulent/disorganized space where all information is equal” describing each piece of information “is a fragment with no relationship to any other fragment.” (Gardner, 2013)
Snowden and Boone wrote “in a chaotic context, searching for right answers would be pointless: the relationships between cause and effect are impossible to determine because they shift constantly and no manageable patterns exist – only turbulence.”
(Snowden & Boone, A Leader's Framework for Decision Making, 2007)
2.5.3.5 Disorder Domain – Unknowing & Unconcerned
A synonym for unknowing is ignorance while a synonym for unconcerned is apathy.
Tumlinson provides a humorous example of two primary motivations for projects to be in the disorder domain:
Two kids are sitting in a high school auditorium, listening to the principal give the welcoming speech for the year. The principal says, “The two greatest dangers that students face are ignorance and apathy.”
One of the students turns to his friend and asks, “Dude, what's ignorance and apathy?”
The other student, bored and restless and wanting for the speech to end says, "I don't know and I don't care." (Tumlinson, 2017)
Snowden and Boone wrote that “the very nature of the fifth context – disorder – makes it particularly difficult to recognize when one is in it.” (Snowden & Boone, A
Leader's Framework for Decision Making, 2007) Van Beurden et al., describe the disorder domain:
Here, we are undecided about which of the four other domains our situation represents, often because we are not conscious of alternatives. We may have a personalized, ‘one-size-fits-all’, default approach to management, decision-making, and group function that reflects our comfort zone rather than any rational choice. (Van Beurden, Kia, Zask, Dietrich, & Rose, 2011)
2.5.4 Cynefin Summary
Dahlberg wrote “The Cynefin Framework developed by David Snowden offers a useful approach to sense-making by dividing systems and processes into three distinct ontologies: (1) Order, (2) un-order, and (3) chaos” adding “Order and un-order co-exist in reality and are infinitely intertwined. Separation of the ontologies serves only as a sense-making tool at the phenomenological level, as assistance in determining the main characteristics of the situation you find yourself in, thus guiding you towards the most useful managerial and epistemological tools for the given ontology.” (Dahlberg, 2015)
The five domains of complexity in the Cynefin context-sensing framework, sorted by increasing levels of complexity, are in Table 2-7.
Table 2-7: Summary of Cynefin Framework Domain Names and Multi-Ontological Foundations

  2007 Domains   2004 Domains   Ontology      Epistemology   Cause & Effect Relationship
  Disorder       Disorder       Unconcerned   Unknowing      Not considered
  Simple         Known          Ordered       Known          a priori cause and effect
  Complicated    Knowable       Ordered       Knowable       a priori cause and effect that requires
                                                             expert knowledge or special investigation
  Complex        Complex        Un-ordered    Knowable       a posteriori cause and effect
  Chaotic        Chaos          Un-ordered    Un-knowable    Absence of cause and effect
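As an illustrative sketch (not part of the cited sources), the domain attributes summarized in Table 2-7 can be encoded as a small lookup structure so that they can be queried consistently; the field names below are my own, not Snowden's.

```python
# Hypothetical encoding of the Table 2-7 summary. Keys and field names are
# illustrative choices, not terminology from Kurtz & Snowden.
CYNEFIN_DOMAINS = {
    "Disorder":    {"ontology": "Unconcerned", "epistemology": "Unknowing",
                    "cause_effect": "not considered"},
    "Simple":      {"ontology": "Ordered",     "epistemology": "Known",
                    "cause_effect": "a priori"},
    "Complicated": {"ontology": "Ordered",     "epistemology": "Knowable",
                    "cause_effect": "a priori, requiring expert knowledge"},
    "Complex":     {"ontology": "Un-ordered",  "epistemology": "Knowable",
                    "cause_effect": "a posteriori"},
    "Chaotic":     {"ontology": "Un-ordered",  "epistemology": "Un-knowable",
                    "cause_effect": "absent"},
}

def ordered_domains():
    """Domains whose ontology assumes order, i.e. where cause and effect
    can be determined (the right half of the framework)."""
    return [d for d, a in CYNEFIN_DOMAINS.items() if a["ontology"] == "Ordered"]

print(ordered_domains())  # ['Simple', 'Complicated']
```

Such an encoding makes the ordered/un-ordered split of Figure 2-9 a simple query rather than a re-reading of the table.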
2.6 Defining System & Model Complexity
Sillitto wrote “There has long been a divide between systems practitioners concerned with “hard” systems – often involving software and complex technologies – and “soft systems,” concerned with social systems and human understanding of systems and human response to complex situations” adding “Both sets of practice seek an underpinning theory or science of systems. However, the relationship of systems science to systems thinking and systems engineering is uncertain, or at least not widely agreed.” (Sillitto H. ,
2012)
The concept of complexity is so broad that there is no universally accepted definition or set of definitions. There is no SE industry accepted definition of complexity due in part to the existence of multiple theories regarding complexity. In the Handbook of Systems
Engineering and Management, Sheard wrote that, “complexity is the most commonly used name for the realm on the edge between order and chaos.” (Sheard S. , Complex
Adaptive Systems in Systems Engineering and Management, 2009) IEEE 15288 uses the terms complex or complexity fifteen times but provides neither a description nor a definition.
INCOSE SEH uses complex or complexity ninety-nine times without a definition.
INCOSE SEBoK defines complexity as “a measure of how difficult it is to understand how a system will behave or to predict the consequences of changing it” adding “it occurs when there is no simple relationship between what an individual element does and what the system as a whole will do.” (INCOSE SEBoK, 2016, p. 93)
INCOSE SEH wrote that “a system concept should be regarded as a shared ‘mental representation’ of the actual system” cautioning that “the SE must continually distinguish between systems in the real world and system representations” (e.g., models). (INCOSE
SEH, 2015, p. 5) Therefore, this research will define complexity for both actual systems and system representative models and describe the interrelationship between the two.
This section defines system complexity for systems and models of systems based on accepted SE theory using definitions provided in Section 2.2, Definition of System Used, frequently referring to earlier research and associated results described in Section 2.1, SE
Theoretical Foundations, and Section 2.5, Cynefin sense-making Framework. Where appropriate, the findings from previous literature research are reproduced to aid in associating the diverse theories and frameworks required to define system complexity. As a system is a combination of people, process, and technology within a defined environment with feedback, system complexity may be developed by considering each of the system elements in turn.
2.6.1 Definition for System Complexity
A system of interest (SOI) is one or more systems (things) that: has an environment; has a boundary; has elements and relationships; contains system elements that may be people, processes, and/or technologies; exhibits behaviors; and has one or more feedback mechanisms as shown in Figure 2-12.
Figure 2-12: Representation of System of Interest (SOI)
Recall that Ashby described two system states shown in Table 2-1 for systems with feedback.
Table 2-1: Ashby’s Set of Defined System States (from above)

  System States   Condition      Meaning
  Stable          VO ≥ VD – VR   The regulator contains requisite variety to control
                                 the outcome for a given disturbance
  Unstable        VO < VD – VR   The regulator does not contain requisite variety to
                                 control the outcome for a given disturbance
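Ashby's condition in Table 2-1 can be sketched as a simple classification, assuming (as a simplification) that variety is expressed as a count of distinguishable states: VO for tolerable outcomes, VD for disturbances, and VR for the regulator. The values in the example are illustrative, not drawn from the source.

```python
# Minimal sketch of Ashby's stability condition from Table 2-1.
# Varieties are assumed to be non-negative counts of distinguishable states;
# the example values below are hypothetical.

def system_state(v_o: int, v_d: int, v_r: int) -> str:
    """Classify a regulated system per the requisite-variety condition:
    stable when the regulator absorbs enough disturbance variety that the
    residual (v_d - v_r) does not exceed the tolerable outcome variety."""
    return "Stable" if v_o >= v_d - v_r else "Unstable"

print(system_state(v_o=4, v_d=10, v_r=8))  # regulator absorbs enough variety
print(system_state(v_o=2, v_d=10, v_r=3))  # regulator lacks requisite variety
```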
2.6.2 Complexity of Technology Elements
Recall from Section 2.1.3.6, Computational Complexity Theory, that Langton described the defined states for computers themselves or as logical universes, within which computers may be embedded shown in Table 2-2.
Table 2-2: Langton & Wolfram’s Defined States for CA that support Universal Computation (from above)

  Langton’s CA States   Wolfram’s CA States   Meaning
  Fixed point           Class I               Relax to a homogeneous fixed point
  Periodic              Class II              Relax to a heterogeneous fixed point or
                                              to short-period behavior
  Complex               Class IV              Support complex interactions between localized
                                              structures, often exhibiting long transients
  Chaotic               Class III             Relax to chaotic, random behavior
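The classes in Table 2-2 describe qualitative behaviors of cellular automata (CA). A minimal elementary-CA sketch makes them concrete: the rule number encodes the next state for each three-cell neighborhood in Wolfram's standard way, and Rule 30 is commonly cited as Class III (chaotic) while Rule 110 is cited as Class IV (complex). This is an illustration of the classification, not part of the cited research.

```python
# Elementary cellular automaton sketch illustrating Wolfram's classes.
# A rule number's bit k gives the next state for neighborhood value k,
# where k = left*4 + center*2 + right.

def step(cells, rule):
    """Advance one generation; neighborhoods wrap around at the edges."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=31, generations=8):
    """Print a few generations starting from a single seed cell."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(generations):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

run(30)   # Rule 30: irregular, Class III-style growth from a single cell
```

Running `run(110)` instead shows the localized, long-lived structures associated with Class IV behavior.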
2.6.3 Complexity of Process Elements
IEEE 15288 “provides a process reference model characterized in terms of the process purpose and the process outcomes that result from the successful execution of the activity tasks” adding “It is understood that some users of this International Standard may desire to assess the implemented processes in accordance with ISO/IEC 15504.”
(ISO/IEC/IEEE 15288:2015(E), 2015, p. 90) International standardization of process assessment began in 1991 with development of ISO/IEC 15504, Information technology
– Process assessment. The initial version of 15504 was released in multiple parts in 1998 as an interim standard. Development of the standard incorporated results from empirical studies and was published in five parts from 2003 to 2006. (Rout, Walker, & Dorling,
2017) IEEE 15288 wrote “processes may be characterized by other attributes common to all processes” adding “ISO/IEC 15504-2 identifies common process attributes that characterize six levels of achievement within a measurement framework for process capability. The purpose and outcomes are a statement of the goals of the performance of each process. This statement of goals permits assessment of the effectiveness of the processes in ways other than simple conformity assessment.” (ISO/IEC/IEEE
15288:2015(E), 2015, pp. 15, 19)
Rout, Walker and Dorling noted that ISO/IEC 15504 focused on process capability and to some extent on organizational maturity writing “the standards framework needed to be extended to address other characteristics of processes in addition to process
capability” adding “a strategy was adopted to define requirements for the construction of measurement frameworks to address identified characteristics in a generic way, with a measurement framework for process capability included.” (Rout, Walker, & Dorling,
2017) ISO/IEC 15504 was partially replaced by ISO/IEC 33001:2015, Information technology – Process assessment – Concepts and Terminology, as of March 2015. Rout,
Walker and Dorling wrote “ISO/IEC 330xx allows for process assessment in terms of a process quality characteristic (PQC)” adding “the new PQC concept has a strong analogy with product quality characteristics.” (Rout, Walker, & Dorling, 2017) While ISO/IEC
15504 defined six capability levels (0 to 5), each process attribute, which consists of one or more generic practices, was assessed on a four-point (N-P-L-F) rating scale shown in
Table 2-8.
Table 2-8: Summary of 15504 Process Assessment Ranking Scale

  PQC   Meaning              Defined as
  F     Fully achieved       >85% - 100%
  L     Largely achieved     >50% - 85%
  P     Partially achieved   >15% - 50%
  N     Not achieved         0% - 15%
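The N-P-L-F scale in Table 2-8 is a straightforward threshold classification, which can be sketched as follows; boundary handling follows the table's “>” lower bounds, and the function name is my own.

```python
# Sketch mapping a process-attribute achievement percentage to the
# ISO/IEC 15504 N-P-L-F scale summarized in Table 2-8.

def rate_attribute(achievement_pct: float) -> str:
    """Return the four-point rating for a 0-100 achievement percentage."""
    if achievement_pct > 85:
        return "F"  # Fully achieved
    if achievement_pct > 50:
        return "L"  # Largely achieved
    if achievement_pct > 15:
        return "P"  # Partially achieved
    return "N"      # Not achieved

print(rate_attribute(90))  # F
print(rate_attribute(40))  # P
```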
2.6.4 Complexity of Human Elements
Recall that in Section 2.1.3.7, Complexity in Organizational Theory, above, Towers described workforce competencies shown in Table 2-3.
Table 2-3: Towers Levels of Appropriate Competencies (from above)

  Practices   Work Type         Skill Level   How to Achieve
  Best        “Assembly Line”   Proficiency   Training
  Good        Information       Fluency       Training & Experience
  Emergent    Knowledge         Literacy      Deliberate Practice
  Novel       Concept           Mastery       Deliberate Practice (10,000 hrs)
2.6.5 Complexity of Environments
Recall that in Section 2.5.4, Cynefin Summary, Snowden described a sense-awareness framework for people to perceive their environment in order to make sense of the situation and make decisions, as shown in Table 2-7.
Table 2-7: Summary of Cynefin Framework Domain Names and Multi-Ontological Foundations (from above)

  2007 Domains   2004 Domains   Ontology      Epistemology   Cause & Effect Relationship
  Disorder       Disorder       Unconcerned   Unknowing      Not considered
  Simple         Known          Ordered       Known          a priori cause and effect
  Complicated    Knowable       Ordered       Knowable       a priori cause and effect that requires
                                                             expert knowledge or special investigation
  Complex        Complex        Un-ordered    Knowable       a posteriori cause and effect
  Chaotic        Chaos          Un-ordered    Un-knowable    Absence of cause and effect
INCOSE and IEEE’s definition of system included statements highlighting that the definition of a system is dependent on the frame of reference or perspective of individuals:
Systems may be configured with one or more of the following system elements: hardware, software, data, humans, processes (e.g., processes for providing service to users), procedures (e.g., operator instructions), facilities, materials and naturally occurring entities. As viewed by the user, they are thought of as products or services;
and,
The perception and definition of a particular system, its architecture, and its system elements depend on a stakeholder’s interests and responsibilities. One stakeholder’s system-of-interest can be viewed as a system element in another stakeholder’s system-of-interest. Furthermore, a system-of-interest can be viewed as being part of the environment for another stakeholder’s system-of-interest. (ISO/IEC/IEEE 15288:2015(E), 2015, p. 11)
2.6.6 Combining People, Process, Technology and Environment Complexity
Ashby’s system states, Langton’s and Wolfram’s CA classes, ISO/IEC 33001:2015, and the workforce competency levels were all designed to measure things that actually exist; none of them considered the case where the existence of the actual thing was unknown or where there was no concern expressed for measurement or assessment. Only the Cynefin framework consciously considers
Disorder, where the existence of the actual thing may be unknown or there is no concern to measure it. Table 2-9 includes all the actual system measurements developed above.
Analysis of Table 2-9 shows that, despite the broad set of constructs used to define complexity created by different experts to measure different things, there is a remarkable similarity that allows for alignment.
Table 2-9: Summary of Complexity Measurements for System States, Technology, Process, People/Workforce, and Environment

  System     Langton’s     Wolfram’s     Process   Workforce   Cynefin
  States     CA States     CA States     PQC       Practices   Domain
  NA         NA            NA            NA        NA          Disorder
  Stable     Fixed point   Class I       F         Best        Known
  Stable     Periodic      Class II      L         Good        Knowable
  Unstable   Complex       Class IV      P         Emergent    Complex
  Unstable   Chaotic       Class III     N         Novel       Chaos

  States     Technology    Technology    Process   People      Environment
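The row alignment in Table 2-9 can be sketched as a set of purely categorical lookup tables, mapping each measurement scale onto its aligned Cynefin domain; no arithmetic on the categories is implied. The dictionary keys and function name below are my own encoding of the table, not terminology from the cited sources.

```python
# Hypothetical encoding of the Table 2-9 alignment. Each measurement scale
# maps onto the Cynefin domain in the same row; a missing measurement falls
# through to Disorder (the "NA" row).

TO_CYNEFIN = {
    "wolfram":  {"Class I": "Known", "Class II": "Knowable",
                 "Class IV": "Complex", "Class III": "Chaos"},
    "pqc":      {"F": "Known", "L": "Knowable", "P": "Complex", "N": "Chaos"},
    "practice": {"Best": "Known", "Good": "Knowable",
                 "Emergent": "Complex", "Novel": "Chaos"},
}

def align(scale: str, value: str) -> str:
    """Return the Cynefin domain aligned with a measurement, or Disorder
    when the scale or value is unknown."""
    return TO_CYNEFIN.get(scale, {}).get(value, "Disorder")

print(align("pqc", "L"))             # Knowable
print(align("wolfram", "Class IV"))  # Complex
```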
While the complexity measurements may align, complexity is a categorical attribute where data observations fall into discrete, named value categories that do not allow for mathematical operations. While there is no summation of complexity using commutative, associative or distributive properties, analysis of Table 2-9 does demonstrate that the Cynefin sense-awareness framework is sufficiently robust to describe the complexity of an environment sensed by the Program Management and/or
System Engineering and Management (PM/SEM) leadership, regardless of the source of
that complexity in that environment when acting as a Regulator providing feedback as shown in Figure 2-13.
Figure 2-13: Graphical Summary of Complexity Measurements for System States, Technology, Process, People/Workforce, and Environment
INCOSE SEBoK wrote “complexity is a measure of how difficult it is to understand how a system will behave or to predict the consequences of changing it” adding “It can be affected by objective attributes of a system such as by the number, types of, and diversity of system elements and relationships, or by the subjective perceptions of system observers due to their experience, knowledge, training, or other sociopolitical considerations.” (INCOSE SEBoK, 2016, p. 93) Sheard et al., wrote:
Complexity is a characteristic of more than just a technical system being developed. It is often created by the interaction of people, organizations, and the environment that are part of the complex system surrounding the technical system. Complexity results from the diversity, connectivity, interactivity, and adaptivity of a system and its
environment. Constant change makes it difficult to define stable goals for a project or system. Technical systems that worked well in the past to solve an environmental problem become obsolete quickly. Intricate networks of evolving cause-effect relationships lead to subtle bugs and surprising dynamics. Unintended consequences can overwhelm or even negate the intended consequences of actions. (Sheard, et al., A Complexity Primer for Systems Engineers, 2016)
Sage and Rouse wrote that many potential difficulties affecting trustworthy systems are problems associated with “organization and management of complexity” rather than
“direct technological concerns.” (Sage & Rouse, An Introduction to Systems Engineering and Systems Management, 2009, p. 13)
While Figure 2-13 provides an interesting conceptual representation of assessing the overall complexity of an actual system based on analysis of potential contributing factors for complexity from each of the system elements, exact knowledge of the actual system is only possible in deterministic, ordered systems. INCOSE wrote that, “the best way to understand a complicated system is to break it down into parts recursively until the parts are so simple that we understand them” warning that “this approach does not help us to understand a complex system, because the emergent properties that we really care about disappear when we examine the parts in isolation.” (INCOSE SEH, 2015, p. 9)
SEMDAM’s application of the Cynefin framework measures the relationship between the observer (e.g., PM/SEM) and the SOI. The PM/SEM’s ability to predict system outcomes a priori or perceive system outcomes a posteriori depends on the skill and experience of the PM/SEM and is therefore subjective. Weinberg wrote:
Systems writers sometimes speak of ‘emergent’ properties of a system, properties that did not exist in the parts but that are found in the whole. Other writers attack this idea, saying that emergent properties are but another name for vital essence. Moreover, they can support their arguments with specific examples of ‘emergent’ properties that turned out to be perfectly predictable. Which is right?
Both are right, but both are in trouble because they speak in absolute terms, as if the ‘emergence’ were ‘stuff’ in the system, rather than a relationship between system and
observer. Properties ‘emerge’ for a particular observer when he could not or did not predict their appearance. We can always find cases in which a property will be ‘emergent’ to one observer and ‘predictable’ to another.
Demonstrations that a property could have been predicted have nothing to do with ‘emergence.’ By recognizing emergence as a relationship between the observer and what he observes, we understand that properties will ‘emerge’ when we put together more and more complex systems. (Weinberg, 1975, p. 60)
2.7 Modelling Complexity
Sheard et al., wrote “all science involves abstraction of the complexity of the world into approaches and models that use simplifying assumptions” adding “The best engineering methods take advantage of the simplicity in the models without diverging so far from reality that behavior can no longer be predicted and controlled.” (Sheard, et al.,
A Complexity Primer for Systems Engineers, 2016) Senge wrote “from a very early age, we are taught to break apart problems, to fragment the world. This apparently makes complex tasks and subjects more manageable” but also warns that “we pay a hidden, enormous price.” (Senge, 2000) Senge wrote:
We can no longer see the consequences of our actions; we lose our intrinsic sense of connection to the larger whole. When we then try to “see the big picture,” we try to reassemble the fragments in our minds, to list and organize all the pieces. But, as physicist David Bohm says, the task is futile – similar to trying to reassemble the fragments of a broken mirror to see a true reflection. Thus, after a while we give up trying to see the whole together. (Senge, 2000)
INCOSE wrote of the black box/white box terminology that the “black box represents an external view of the system (attributes),” adding that the “white box represents an internal view of the system (attributes and structure of the elements).” (INCOSE SEH, 2015, p. 262) Dietz et al., wrote:
A constructional model (or white-box model) of an enterprise, can always be validated from the actual construction. Contrarily, a functional model (or black-box model) is by its very nature subjective, because function is not a system property but a relationship between the system and a stakeholder. Consequently, every system has
(at any moment) one construction, but it may have at least as many functions as there are stakeholders. (Dietz, et al., 2013)
In describing the impact of increased complexity on modeling modern systems,
Piaszczyk wrote “Complexity of modern systems that integrate humans, software, and hardware to address the frequently conflicting needs and constraints makes requirements engineering increasingly difficult to manage” adding “Dealing with this complexity requires a complete revision of approaches and methods of systems engineering to achieve usable, reliable, and cost-effective solutions to the problems that are becoming more and more difficult.” (Piaszczyk, 2011)
2.7.1 Definition of Model Complexity
This research proposes use of the Class of System Problem (COSP) to measure model complexity. Verification of the completeness and correctness of the included COSP is provided by inspection of Figure 3-1, The INCOSE Complex Systems Working Group Use of the Cynefin Framework to Identify Classes of Systems Problems.
The presence of overloaded terms (e.g., ‘complex’, ‘known’, or ‘simple’) complicates the task of describing SEMDAM. The constructs used to describe complexity by INCOSE’s Complex Systems Working Group, Cynefin’s complexity domains, and the candidate explanations of COSP are interrelated and are used frequently in this dissertation. Because the INCOSE COSP and the Cynefin domains are synonymous, and the Cynefin domain names are more concise and include disorder, this research proposes use of the candidate explanations for COSP (italicized) based on the Cynefin complexity domain names (not italicized) and the INCOSE COSP in Table 2-10.
Table 2-10: SEMDAM Alignment Between INCOSE’s Complexity Working Group, Cynefin’s Typology of Operating Environments, and SEMDAM Candidate Explanations for COSP

INCOSE Complexity Working       Cynefin Typology of        COSP Model
Group Domains                   Operating Environments     Complexity
------------------------------  -------------------------  ----------
Not understood or expressed     Disorder                   Disorder
Simple and complicated systems  Known or Simple            Known
Massively-complicated systems   Knowable or Complicated    Knowable
Complex systems                 Complex                    Complex
Chaotic systems                 Chaos or Chaotic           Chaos
This research will standardize on the domain names used in the 2004 Kurtz and Snowden presentation and by the INCOSE Complex Systems Working Group, included in the third column of Table 2-10. Users of SEMDAM need to be aware that SEMDAM does not measure actual system complexity. It measures the PM/SEM’s understanding of the COSP, as demonstrated by regular identification of predictions or trends.
2.7.2 Identification of SEMs
Classification of SEMs involves: researching the SE literature; normalizing the many divergent representations and definitions of systems and systems engineering in the technical literature; synthesizing the confusing and sometimes conflicting terms used to describe systems into generally acceptable, recognizable groups; identifying significant characteristics of each group; and identifying constant differences. Smith wrote:
There are two basic approaches to classification. The first is typology, which conceptually separates a given set of items multidimensionally… The key characteristic of a typology is that its dimensions represent concepts rather than empirical cases. The dimensions are based on the notion of an ideal type, a mental construct that deliberately accentuates certain characteristics and not necessarily something that is found in empirical reality. (Weber, 1949) As such, typologies create useful heuristics and provide a systematic basis for comparison. Their central drawbacks are categories that are neither exhaustive nor mutually exclusive, are often
based on arbitrary or ad hoc criteria, are descriptive rather than explanatory or predictive, and are frequently subject to the problem of reification. (Bailey, 1994)
A second approach to classification is taxonomy. Taxonomies differ from typologies in that they classify items on the basis of empirically observable and measurable characteristics. (Bailey, 1994, p. 6) Although associated more with the biological than the social sciences (Sokal & Sneath, 1964), taxonomic methods–essentially a family of methods generically referred to as cluster analysis–are usefully employed in numerous disciplines that face the need for classification schemes (Lorr, 1983; Mezzich & Solomon, 1980). (Smith K. B., 2002)
Use of a sense-making framework is a form of typological analysis defined by SAGE
Research Methods as:
Typological analysis is a strategy for descriptive qualitative (or quantitative) data analysis whose goal is the development of a set of related but distinct categories within a phenomenon that discriminate across the phenomenon. Typologies are characterized by categorization, but not by hierarchical arrangement; the categories in a typology are related to one another, not subsidiary to one another. (SAGE Research Methods, 2008)
This research incorporated four SEMs, which are listed below and described in detail in Chapter 2, Literature Review. The SEMs are:

Traditional Systems Method (TSM) – described in Section 2.3.3, Codifying TSM;

System-of-Systems Method (SoSM) – described in Section 2.3.4, Codifying SoSM;

Enterprise Systems Method (ESM) – described in Section 2.4.2, Codifying ESM; and,

Complex Systems Method (CSM) – described in Section 2.4.3, Codifying CSM.
Figure 2-14, not drawn to scale, includes the SEMs identified in the literature research and applied in SEMDAM: NOSEM, TSM, SoSM, ESM, and CSM, and their proposed typology, ordered by the ability of each SEM to address COSP of increasing complexity.
Figure 2-14: Proposed Typology of SEMs in Relation to Complexity
The approach of considering one or more SEMs based on increasing complexity is not new. Sheard et al., wrote that as a system’s complexity increases, “the risk associated with using simpler methods and simplifying assumptions also increases, and more advanced techniques may be needed” adding “tools and techniques apply differently to systems on a spectrum of increasing complexity.” (Sheard, et al., A Complexity Primer for Systems Engineers, 2016)
2.7.3 Alignment of COSP and SEMs
To define system complexity in a way that correctly identifies the class of system problem requires definitions of system and complexity suitable across the engineering of ordered systems and un-ordered systems, including each of the SEMs presented earlier. Applying the Cynefin sense-awareness framework, with complexity domains associated with observation of cause and effect, SEMDAM provides a recommendation for a COSP-appropriate SEM based on measuring statistically significant association for a priori prediction of system outcomes or a posteriori perception of system response, based on the following alignment:
No evidence of considering a SEM aligns with the Disorder domain;

Known – where the relationship between cause and effect is obvious to all – aligns with TSM (i.e., assumptions of Newtonian mechanics, decomposition, hierarchical management, and a stable environment apply);

Knowable – where the relationship between cause and effect is knowable with expert knowledge or special investigation – aligns with SoSM (e.g., see INCOSE’s approach to reducing complexity in SE for SoSs, which “can be accomplished through the inclusion of contributions from experts across relevant disciplines coordinated by the SE”); (INCOSE SEH, 2015, p. 12)

Complex – where the relationship between cause and effect can only be perceived in retrospect – aligns with ESM (i.e., emergence, differentiation, selectionism, adaptation, self-organization, homoeostasis, and loose coupling apply; Newtonian mechanics, decomposition, hierarchical management, and a stable environment do not apply); and,

Chaos – where there is no relationship between cause and effect – aligns with CSM (egocentric agents, autonomy, multi-scalarity, anisotropy, and edge of chaos apply; autopoiesis, homoeostasis, Newtonian mechanics, decomposition, hierarchical management, and a stable environment do not apply).
Recalling the assertion expressed by DeRosa et al., that “there are classes of problems that require complex systems to deal with them” (DeRosa, Grisogono, Ryan, & Norman, 2008), Table 2-11 presents the proposed alignment of COSP and complexity-appropriate SEMs that addresses Ashby’s requisite variety theory as applied to the engineering of systems of increasing complexity. Using an inferred COSP as input, SEMDAM looks up the associated complexity-appropriate SEM from Table 2-11.
Table 2-11: Proposed Alignment Between Inferred COSP and Complexity Appropriate SEM

Inferred COSP   Recommended SEM
-------------   ---------------
Disorder        NOSEM
Known           TSM
Knowable        SoSM
Complex         ESM
Chaos           CSM
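The lookup in Table 2-11 is simple enough to sketch directly. The sketch below is illustrative only (the function and variable names are ours, not SEMDAM terminology); it shows that once a COSP has been inferred, the SEM recommendation is a deterministic table lookup:

```python
# Hypothetical sketch of SEMDAM's final lookup step (Table 2-11).
# Maps an inferred Class of System Problem (COSP) to the recommended
# complexity-appropriate Systems Engineering Method (SEM).
COSP_TO_SEM = {
    "Disorder": "NOSEM",   # no evidence of considering a SEM
    "Known":    "TSM",     # Traditional Systems Method
    "Knowable": "SoSM",    # System-of-Systems Method
    "Complex":  "ESM",     # Enterprise Systems Method
    "Chaos":    "CSM",     # Complex Systems Method
}

def recommend_sem(inferred_cosp: str) -> str:
    """Return the complexity-appropriate SEM for an inferred COSP."""
    return COSP_TO_SEM[inferred_cosp]
```

The diagnostic effort of SEMDAM lies in inferring the COSP from evidence, not in this final step.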
Recall that Snowden’s original research into sense-making started with a multi-ontological perspective created by contrasting the nature of systems (ontology) with the nature of the way we know things (epistemology). While the names used to describe the five domains of complexity varied slightly, the fundamental concept did not change: our understanding of complexity is a combination of the nature of systems and the nature of the way we come to know about them. This duality is at the heart of the debate between an objective definition of complexity (i.e., complexity as an attribute of a system) and a subjective definition of complexity (i.e., what is complex for one person may not be for another). Gorod, Sauser, and Boardman wrote:
While the scope of engineering and managing systems has changed dramatically and become a significant challenge in our ability to achieve success, fundamental to understanding the context of any system is the necessity to distinguish between the system type and its strategic intent, as well as its systems engineering and managerial problems. Therefore, no single approach can solve these emerging problems, and thus no one strategy is best for all projects. (Gorod, Sauser, & Boardman, System-of-Systems Engineering Management: A Review of Modern History and a Path Forward, 2008)
2.8 Literature Review Summary
This section provided an in-depth treatment of underlying theories and then provided a definition of system used for this research. Based on classical sciences, system sciences and complexity, this research identified a coherent strategy to leverage existing and emerging SEMs including mapping the inherent complexity to the ability of a SEM to address complexity based on the Cynefin sense-awareness framework.
Next, Chapter 3, SEMDAM Methodology, will describe the specific methodology for utilization of SEMDAM followed by an empirical case study demonstrating SEMDAM applied to a significant problem.
3 SEMDAM Methodology
It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience. (Einstein, 1934)
This chapter introduces SEMDAM and describes how and why SEMDAM works, as a basis for the subsequent chapters: Chapter 4, SEMDAM Applied to an Empirical Case Study;
Chapter 5, Synthesis & Discussion; and, Chapter 6, Conclusions. Estefan defined methodology as “a collection of related processes, methods, and tools.” (Estefan, 2007)
Piaszczyk wrote “a methodology is essentially a ‘recipe’ and can be thought of as the application of related processes, methods, and tools to a class of problems that all have something in common.” (Piaszczyk, 2011) McGregor and Murnane wrote that “scholars often use the terms methodology and method interchangeably” adding “{methodology} refers to philosophy and {method} refers to technical procedures applied to conduct research.” (McGregor & Murnane, 2010) McGregor and Murnane provide the definition of methodology used to organize and present this research:
The word methodology comprises two nouns: method and ology, which means a branch of knowledge; hence, methodology is a branch of knowledge that deals with the general principles or axioms of the generation of new knowledge. It refers to the rationale and the philosophical assumptions that underlie any natural, social or human science study, whether articulated or not. Simply put, methodology refers to how each of logic, reality, values, and what counts as knowledge inform research. (McGregor & Murnane, 2010)
Section 3.1, Research Design, describes the methodology used for the design of this research including formulation, alternative approaches considered, and validation.
Section 3.2, SEMDAM Introduction, presents SEMDAM including the processes that comprise the model. Section 3.3, Attribute Selection, describes attribute types and data qualification methods. Section 3.4, Statistical Model Selection, describes statistical methods including linear regression, analysis of variance, and logistic regression.
3.1 Research Design
Using Salkind’s definitions of research methods, this research design is a combination of nonexperimental research methods: descriptive, correlational, and qualitative. (Salkind, 2012, p. 11) Identifying and describing Classes of System Problems (COSP) and the current state of Systems Engineering Methods (SEMs) at the time of this study is based on the descriptive research method, performed by review of the literature. Looking for statistically significant relationships between variables to measure the accuracy of a prediction or perception is based on the correlational research method.
Salkind wrote “correlational research describes the linear relationship between two or more variables without any hint of attributing the effect of one variable on another.”
(Salkind, 2012, p. 203) Using the Cynefin framework to sense complexity is a typological analysis based on the qualitative research method. Salkind wrote that “qualitative research is social or behavioral science research that explores the processes that underlie human behavior” adding that “qualitative research is not just an alternative to quantitative research; it is a different approach that allows you to ask and answer different types of questions.” (Salkind, 2012, p. 213)
Brown wrote that good research design, like good systems engineering, requires:
1. an understanding of the context in which the research is being done and of the research requirements and aims;
2. an understanding of the concepts and theoretical principles upon which the research design will be based;
3. the consideration of and selection between alternative design options; and,
4. an early consideration of how the research will be validated, such that planning for validation permeates the research design at all stages and does not become an afterthought. (Brown, 2009)
The research context is presented in Section 3.1.1, Research Context. The understanding of the concepts and theoretical principles is described in Section 3.1.2,
Methodological Formulation. Consideration and discussion of alternate approaches is presented in Section 3.1.3, Alternatives Considered. Section 3.1.4, Plans for Validation, addresses considerations for validation of this research.
3.1.1 Research Context
This research applies the Cynefin sense-awareness framework to provide a diagnostic assessment of the class of system problem (COSP) by analysis of evidence of a priori prediction and/or a posteriori perception of cause and effect as a basis for recommending a SEM appropriate for the COSP. This research is not the first to consider using the
Cynefin framework to sense the complexity of an environment.
Sheard proposed using the Cynefin framework to “determine what your situation is” {where situation is analogous to COSP} as a recommended principle for management of complex adaptive systems engineering efforts, writing that known situations and knowable situations “are the only toolkits that many managers have. Fortunately, Kurtz and Snowden produced two more,” listing complex situations and chaotic situations. (Sheard S., Complex Adaptive Systems in Systems Engineering and Management, 2009, p. 1314)
Gardner used the Cynefin framework to “provide knowledge workers and information architects with a framework” as a basis for “development of a suite of Enterprise 2.0” collaboration tools. (Gardner, 2013) French proposed use of the Cynefin framework to identify situation and issues for categorizing decision support options because it benefits analysts by “helping identify what methodologies might be suitable for the problem.”
(French, 2012)
Van Beurden et al., applied the Cynefin framework to “identify approaches appropriate to the level of complexity” applying it to health promotion for “planning or reviewing an entire portfolio of projects to enable emergent {health} practices … while still rolling out standardized, evidence-based strategies.” They observed that using “the
{Cynefin} framework helps those addressing complex issues to communicate the value and meaning of their work within a system that largely privileges a reductionist approach.” (Van Beurden, Kia, Zask, Dietrich, & Rose, 2011)
Recently, the INCOSE Complex Systems Working Group, “working at the intersection of complex systems sciences and SE,” applied the Cynefin framework to understand the “classes of systems problems” (COSP) to “facilitate the identification of tools and techniques to apply in the engineering of complex systems,” as shown in Figure 3-1. (McEver, 2016)
Figure 3-1: The INCOSE Complex Systems Working Group Use of the Cynefin Framework to Identify Classes of Systems Problems (McEver, 2016). The INCOSE representation cited the source as: Kurtz and Snowden, “The new dynamics of strategy: Sense-making in a complex and complicated world,” IBM Systems Journal 42(3), 2003.
Members of the INCOSE Complex Systems Working Group include: Sarah Sheard,
Eric Honour, Jimmie McEver, Dorothy McKenney, Alex Ryan, Stephen Cook, Duane
Hybertson, Joseph Krupa, Paul Ondrus, Robert Scheurer, Janet Singer, Joshua Sparber, and Brian White. (McEver, 2016)
3.1.2 Methodological Formulation
This section presents the basic theory on how and why SEMDAM works and how application of SEMDAM will result in a recommendation of a complexity appropriate
SEM. Reasoning is the process of using existing knowledge to draw conclusions, make predictions, or construct explanations. Three main methods of reasoning are: deductive reasoning, inductive reasoning, and abductive reasoning. SEMDAM relies on abductive reasoning.
3.1.2.1 Abductive Reasoning
Abductive reasoning is a form of logical inference that starts with an observation and then attempts to find the simplest and most likely explanation for a given set of evidence. The
New World Encyclopaedia wrote:
Abduction, or inference to the best explanation, is a method of reasoning in which one chooses the hypothesis that would, if true, best explain the relevant evidence. Abductive reasoning starts from a set of accepted facts and infers most likely, or best, explanations. Aristotle discussed abductive reasoning (apagoge, Greek) in his Prior Analytics. The concept of abduction is applied beyond logic to the social sciences and the development of artificial intelligence.
Abduction means determining the precondition. It is using the conclusion and the rule to assume that the precondition could explain the conclusion. Example: "When it rains, the grass gets wet. The grass is wet, it must have rained." Diagnosticians and detectives are commonly associated with this style of reasoning.
It allows inferring a as an explanation of b. Because of this, abduction allows the precondition of “a entails b” to be inferred from the consequence b. Deduction and abduction thus differ in the direction in which a rule like “a entails b” is used for inference.
The philosopher Charles Peirce introduced abduction into modern logic. In his works before 1900, he mostly uses the term to mean the use of a known rule to explain an observation; for example, “if it rains the grass is wet,” is a known rule used to explain that the grass is wet. In other words, it would be more technically correct to say, "If the grass is wet, the most probable explanation is that it recently rained."
He later used the term to mean creating new rules to explain new observations, emphasizing that abduction is the only logical process that actually creates anything new. Namely, he described the process of science as a combination of abduction, deduction, and induction, stressing that new knowledge is only created by abduction. (New World Encyclopedia, 2017)
Bradford provides an overview of abductive reasoning that highlights its applicability to this research, writing:
Abductive reasoning usually starts with an incomplete set of observations and proceeds to the likeliest possible explanation for the group of observations. It is based on making and testing hypotheses using the best information available. It often entails making an educated guess after observing a phenomenon for which there is no clear explanation. (Bradford, 2017)
The Stanford Encyclopaedia of Philosophy provides the following example to illustrate the use of abduction in science:
At the beginning of the nineteenth century, it was discovered that the orbit of Uranus, one of the seven planets known at the time, departed from the orbit as predicted on the basis of Isaac Newton’s theory of universal gravitation and the auxiliary assumption that there were no further planets in the solar system. One possible explanation was, of course, that Newton’s theory was false. Given its great empirical successes for (then) more than two centuries, that did not appear to be a very good explanation. Two astronomers, John Couch Adams and Urbain Leverrier, instead suggested (independently of each other but almost simultaneously) that there was an eighth, as yet undiscovered planet in the solar system; that, they thought, provided the best explanation of Uranus’ deviating orbit. Not much later, this planet, which is now known as “Neptune,” was discovered. (Stanford Encyclopedia of Philosophy, 2017)
Astronomers would have been more surprised to learn that Newton was wrong than to learn that there is another planet in our solar system. This is called the Surprise Principle.
Sober described this implausibility of occurrence as the Surprise Principle, stating that “the Surprise Principle describes what it takes for an observation to strongly favor one hypothesis over another,” as follows:
The Surprise Principle: An observation O strongly supports H1 over H2 if both the following conditions are satisfied, but not otherwise: (1) if H1 were true, O is to be expected; and (2) if H2 were true, O would have been unexpected.
The Surprise Principle involves two requirements. It would be more descriptive, though more verbose, to call the idea the No Surprise/Surprise Principle. The Surprise Principle describes when an observation O strongly favors one hypothesis (H1) over another (H2). There are two requirements:
(1) If H1 were true, you would expect O to be true.
(2) If H2 were true, you would expect O to be false.
That is (1) if H1 were true, O would be unsurprising; (2) if H2 were true, O would be surprising. The question to focus on is not whether the hypotheses (H1 or H2) would be surprising. The Surprise Principle has nothing to do with this. To apply the Surprise Principle, you must get clearly in mind what the hypotheses are and what the observation is. The Surprise Principle gives advice on what a hypothesis must do if it is to be strongly supported by the predictions it makes. First, the hypotheses shouldn’t make false predictions. Second, among the true predictions the hypothesis makes, there should be predictions we would expect not to come true if the hypothesis were false. (Sober, 2012, p. 30)
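Sober’s two requirements can be paraphrased in likelihood notation (the notation is ours, not Sober’s):

```latex
% Likelihood paraphrase of the Surprise Principle (our notation):
% observation O strongly favors hypothesis H_1 over H_2
O \text{ strongly favors } H_1 \text{ over } H_2
\quad \text{iff} \quad
P(O \mid H_1) \text{ is high and } P(O \mid H_2) \text{ is low.}
```

On this reading, the strength of support depends on how expected the observation is under each hypothesis, not on how plausible either hypothesis is on its own.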
According to the Stanford Encyclopaedia of Philosophy, a formulation of abduction follows the format:
Given evidence E and candidate explanations H1, …, Hn of E, infer the truth of that Hi which best explains E. (ABD)

(Stanford Encyclopedia of Philosophy, 2017)
The use of abductive logic imposes several requirements, including defining acceptable evidence (“given evidence E”) and appropriate candidate explanations (“H1, …, Hn of E”), both of which are described below.
3.1.2.2 Defining candidate explanations H1, …, Hn
This research applied the Cynefin sense-awareness framework as a basis for defining the “candidate explanation” used to describe COSP. Louis wrote:
Sense making can be viewed as a recurring cycle comprised of a sequence of events occurring over time. The cycle begins as individuals form unconscious or conscious
anticipations and assumptions, which serve as predictions about future events. Subsequently, individuals experience events that may be discrepant from predictions. Discrepant events, or surprises, trigger a need for explanation, or post-diction, and correspondingly, for a process through which interpretations of discrepancies are developed. Interpretation, or meaning, is attributed to surprises. Based on the attributed meanings, any necessary behavioral responses to the immediate situation are selected. Based on attributed meanings, any understandings of actors, actions, and settings are updated and predictions about future experiences in the setting are revised. The updated anticipations and revised assumptions are analogous to alterations in cognitive scripts. (Louis, 1980)
The Cynefin framework contains five complexity domains that categorize complexity by the nature of the relationship between cause and effect. Specifically, SEMDAM uses the Cynefin framework to sense COSP by analysis of evidence of either an a priori prediction and/or a posteriori perception of cause and effect.

Per Section 2.5, Cynefin sense-making Framework, each Cynefin complexity domain has a distinctive cause and effect relationship. Ordering and evaluating the candidate explanations from Disorder to Chaos allows use of the Surprise Principle for a given set of evidence. For example, if the COSP is assumed Known, the PM/SEM should be more surprised to find that a previous prediction of system outcomes is incorrect than to find that it is correct. The candidate explanations H1, …, Hn align with the COSP defined in Table 2-11, Proposed Alignment Between Inferred COSP and Complexity Appropriate SEM: Disorder, Known, Knowable, Complex, and Chaos.
3.1.2.3 Inferring COSP ‘Given Evidence E’
SEMDAM infers COSP based on the nature of evidence and actual evidence described in Table 3-1, by measurement of user-defined attributes selected during Step 5 in Section 3.2.5. Specific details on selecting attributes are described in Section 3.3. Details on selecting and applying statistical methods are described in Section 3.4.
Table 3-1: Definition of Evidence E for SEMDAM

Class of System        Nature of Evidence           Evidence E
---------------------  ---------------------------  --------------------------------------
Not understood or      No evidence of systems       Not conscious of alternatives
expressed              engineering and management
                       (SE&M)
Simple and             a priori cause and effect    Statistically significant predictive
complicated systems                                 analytics of selected attributes
Massively-complicated  a priori cause and effect    Statistically significant predictive
systems                that requires expert         analytics of selected attributes
                       knowledge or special         developed by experts or resulting
                       investigation                from special studies
Complex systems        a posteriori cause and       Statistically significant descriptive
                       effect                       analytics of selected attributes
Chaotic systems        Absence of cause and effect  Neither predictive nor descriptive
                                                    analytics reach the level of
                                                    statistical significance
The relationship between SEMDAM COSP and the Nature of Evidence is verified using the description method of Section 2.5, Cynefin sense-making Framework.
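The decision logic of Table 3-1 can be condensed into a short sketch. The boolean flag names are ours, not SEMDAM terminology, and the determination of "statistically significant" is assumed to have been made upstream by the statistical methods of Section 3.4:

```python
# Hypothetical condensation of Table 3-1's decision logic.
# The boolean flags are illustrative names, not SEMDAM terminology.
def infer_cosp(considered_sem: bool,
               predictive_significant: bool,
               expert_derived: bool,
               descriptive_significant: bool) -> str:
    """Infer the Class of System Problem (COSP) from evidence E."""
    if not considered_sem:
        return "Disorder"        # not conscious of alternatives
    if predictive_significant:
        # a priori cause and effect; expert knowledge or special
        # investigation distinguishes Knowable from Known
        return "Knowable" if expert_derived else "Known"
    if descriptive_significant:
        return "Complex"         # a posteriori cause and effect only
    return "Chaos"               # no analytics reach significance
```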
3.1.3 Alternatives Considered
Table 3-2 contains some of the alternatives considered in developing SEMDAM.
Table 3-2: Summary of Alternatives Considered

SEMDAM Component      SEMDAM Selection     Alternative Considered       Described in Section
--------------------  -------------------  ---------------------------  --------------------
Complexity Framework  Cynefin              MITRE Enterprise Systems     3.1.3.1
                                           Engineering ESE Profiler
Complexity Framework  Cynefin              MITRE Systems Engineering    3.1.3.2
                                           Activities (SEA) Profiler
Complexity Framework  Cynefin              Types of System Complexity   3.1.3.3
                                           Framework
Reasoning model       Abductive reasoning  Deductive Reasoning          3.1.3.4
Reasoning model       Abductive reasoning  Inductive Reasoning          3.1.3.5
3.1.3.1 MITRE Enterprise Systems Engineering ESE Profiler
Stevens wrote that “engineering of these large-scale, complex systems must take into account the specific characteristics of the system and the context in which it is being engineered, developed, and acquired and in which it will operate,” adding, “The first step is to be able to characterize the system in its context.” Stevens wrote, “This complexity framework defines the problem context along multiple dimensions taking into account both the nature of the decision makers and the nature of the system itself.” The ESE Profiler is based on a COSP of TSM, SoSM, and ESM, which are identified in the polar diagram output, shown in Figure 3-2, as the three concentric rings from the center reflecting “increasing levels of complexity and uncertainty.” (Stevens, 2008)
The aspects of ESE Profiler that are similar with and provide validation of SEMDAM are: (1) agreement that “system-based problem-solving methodologies should be selected based on the context of the problem” (i.e., assess COSP and align with SEM); (2) use of a
COSP that is in line and in order with SEMDAM (i.e., TSM is less complex than SoSM, which is less complex than ESM); (3) acknowledgement that the assessment measures complexity based on both system and observer; and (4) agreement that a complexity assessment provides value as either a one-time self-assessment or a recurring situational modeling tool. The ESE Profiler was not selected as the complexity framework due to the inability to verify, validate, or in any way independently measure the statistical significance of the responses to strategic context, implementation context, stakeholder context, and systems context.
Figure 3-2: MITRE's Enterprise Systems Engineering Profiler is organized into four quadrants and three rings (Stevens, 2008)
3.1.3.2 MITRE Systems Engineering Activities (SEA) Profiler
White wrote that the SEA Profiler was constructed for application in complex and enterprise systems engineering endeavors and is “primarily intended to be used by a systems engineer, program manager, or project leader, for characterizing the systems engineering (SE) being done on their program/project within a given environment and timeframe.” (White B. E., Systems Engineering Activities (SEA) Profiler, 2010) While similar in structure to the Enterprise Systems Engineering (ESE) Profiler (Section 3.1.3.1, above), White describes some of the distinctions when he wrote:
The SEA Profiler can be used in conjunction with the Enterprise Systems Engineering (ESE) Profiler (Stevens 2008) which is primarily oriented toward characterizing the nature and degree of difficulty of the environment surrounding the program/project effort. In short, the ESE Profiler helps you characterize your situation, and the SEA
Profiler helps you characterize what you're doing about it. (White B. E., Systems Engineering Activities (SEA) Profiler, 2010)
The SEA Profiler involves assessment of 9 groups of typical SE activities – define the system problem, analyze alternatives, utilize a guiding architecture, consider technical approaches, pursue solutions, manage contingencies, develop implementations, integrate operational capabilities, and learn by evaluating effectiveness – by selecting one of five
COSP levels from the following choices:
A. Left end of slider – activities are intended to characterize and be most closely associated with the “traditional” practice of conventional or prescriptive SE utilizing the best-known techniques;
B. Left Intermediate Interval – activities associated mostly with a “directed” system of systems (SoS);
C. Center Intermediate Interval – activities are intended to characterize and be most closely associated with an “acknowledged” SoS;
D. Right Intermediate Interval – activities associated mostly with a “collaborative” SoS; and
E. Right End of Slider – activities associated mostly with a “virtual” SoS. (White B. E., Systems Engineering Activities (SEA) Profiler, 2010)
The result of using the SEA Profiler is a situational awareness assessment dashboard shown in Figure 3-3.
The aspects of SEA Profiler that are similar with and provide validation of SEMDAM are: (1) agreement that the intended users are systems engineers, program managers, or project leaders; (2) use of a COSP that is in line and in order with SEMDAM (e.g., TSM is less complex than SoSM); and (3) agreement on the need to assess the COSP and align with the SEM. SEA Profiler was not selected as the complexity framework due to the inability to verify, validate or in any way independently evaluate or measure the
statistical significance of the responses to the 9 typical SE activities and the limitation that SEA Profiler only considers TSM and SoSM while not considering ESM or CSM.
Figure 3-3: Example Situational Assessment from SEA Profiler (White B. E., Systems Engineering Activities (SEA) Profiler, 2010)
3.1.3.3 Types of System Complexity Framework
In her dissertation (Sheard S. A., Assessing the Impact of Complexity Attributes on
System Development Project Outcomes, 2012) Sheard wrote that her research was based on a Types of System Complexity framework previously introduced by Sheard and refined by Sheard and Mostashari. Sheard’s initial resilience framework consisted of five aspects that are “often part of resilience definitions in literature: time periods, system, event, required action, and preserved qualities; and five prescriptive principles to improve resilience: system, organizational, economic, ecological, political, and socio-ecological.”
(Sheard S. , A Framework for System Resilience Discussions, 2008) Sheard and
Mostashari updated the framework for types of complexity that “includes three types of structural complexity (size, connectivity, and architecture), two types of dynamic complexity (short-term and long-term), and one additional type, socio-political complexity.” (Sheard & Mostashari, A Complexity Typology for Systems Engineering,
2010) Sheard’s Types of System Complexity Framework, used as the basis for her dissertation, was based on six types of complexity including: structural complexity (size)
{SS}; structural complexity (connectivity) {SC}; structural complexity (inhomogeneity)
{SI}; dynamic complexity (short term) {DS}; dynamic complexity (long term) {DL}; and Socio-political complexity {SP} as shown in Figure 3-4.
Figure 3-4: Sheard's Types of Complexity Framework Applied to Entities (Sheard S. A., Assessing the Impact of Complexity Attributes on System Development Project Outcomes, 2012, p. 58)
Sheard’s dissertation described the intended primary output as “a list of complexity measures that are shown to have a statistically significant impact on specific project
outcomes, including cost overrun, schedule delay, and performance shortfall, and possibly others” by looking for associations between “potentially complex attributes and system outcomes.” Sheard surveyed senior INCOSE engineers on 52 input variables and their relationships to other variables in 10 groups: Project management (10 variables); Project basics (9 variables); Size (9 variables); Requirements (8 variables); Stakeholders (10 variables); Conflict (4 variables); Uncertainty (2 variables); Changes (5 variables); Skills (2 variables); and Precedence (4 variables), ultimately collecting 75 usable responses from the 121 received. (Sheard S. A., Assessing the Impact of Complexity Attributes on System Development Project Outcomes, 2012)
Sheard’s Types of System Complexity Framework was considered for use because it attempted to identify predictive complexity measurements. Sheard wrote “While no causality was demonstrated, it is shown that projects with lower values of three specific complexity measures have better outcomes of all kinds. Those measures are the number of difficult requirements, the “cognitive fog” experienced within the project, and the ‘stakeholder involvement’ characteristics;” however, Sheard noted limitations including qualitative rather than quantitative results and observed that, by its nature, a retrospective survey is only able to assess projects that have finished. (Sheard S. A., Assessing the Impact of Complexity Attributes on System Development Project Outcomes, 2012) This complexity framework was not selected due to its reliance on subjective judgment, which does not allow for independent validation or verification of responses, and because of the retrospective nature of the survey.
3.1.3.4 Deductive Reasoning
The New World Encyclopaedia and Trochim describe deductive reasoning as follows:
Deduction means determining the conclusion. It is using the rule and its precondition to make a conclusion. Example: "When it rains, the grass gets wet. It rains. Thus, the grass is wet." Mathematicians are commonly associated with this style of reasoning. It allows deriving b as a consequence of a. In other words, deduction is the process of deriving the consequences of what is assumed. Given the truth of the assumptions, a valid deduction guarantees the truth of the conclusion. A deductive statement is based on accepted truths. For example, all bachelors are unmarried men. It is true by definition and is independent of sense experience. (New World Encyclopedia, 2017)
{As shown in Figure 3-5} Deductive reasoning works from the more general to the more specific. Sometimes this is informally called a "top-down" approach. We might begin with thinking up a theory about our topic of interest. We then narrow that down into more specific hypotheses that we can test. We narrow down even further when we collect observations to address the hypotheses. This ultimately leads us to be able to test the hypotheses with specific data – a confirmation (or not) of our original theories (Trochim, 2006)
Figure 3-5: Deductive reasoning begins with Theory and is therefore dependent on existence of acceptable theory (Trochim, 2006)
Since deductive arguments attempt to demonstrate that a conclusion must be true provided that the premises are true, the primary objection to the application of deductive reasoning here is the absence of SE premises that are generally accepted as true (i.e., SE “accepted truths” or SE industry-accepted theory). Specific issues with foundational SE theory that impact development of a diagnostic assessment model include the lack of industry-wide agreement on the naming, number, or need for SEMs other than traditional or classical SE.
3.1.3.5 Inductive Reasoning
The New World Encyclopaedia and Trochim describe inductive reasoning as follows:
Induction means determining the rule. It is learning the rule after numerous examples of the conclusion following the precondition. Example: "The grass has been wet every time it has rained. Thus, when it rains, the grass gets wet." Scientists are commonly associated with this style of reasoning. It allows inferring some a from multiple instantiations of b when a entails b. Induction is the process of inferring probable antecedents as a result of observing multiple consequents. An inductive statement requires perception for it to be true. For example, the statement, "it is snowing outside" is invalid until one looks or goes outside to see whether it is true or not. Induction requires sense experience. (New World Encyclopedia, 2017)
{As shown in Figure 3-6} Inductive reasoning works the other way, moving from specific observations to broader generalizations and theories. Informally, we sometimes call this a “bottom up” approach. In inductive reasoning, we begin with specific observations and measures, begin to detect patterns and regularities, formulate some tentative hypotheses that we can explore, and finally end up developing some general conclusions or theories. (Trochim, 2006)
Figure 3-6: Inductive reasoning begins with observation and is therefore dependent on existence of measurement (Trochim, 2006)
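The contrast between the two reasoning styles can be illustrated with the rain-and-grass example used above. The rule representation and function names below are hypothetical, chosen only for this sketch.

```python
# Hedged sketch contrasting the two reasoning styles with the rain/grass
# example from the text. The rule representation and function names are
# illustrative only.

from collections import Counter

def deduce(rule, precondition):
    """Deduction: apply an accepted rule; the conclusion is guaranteed
    whenever the premises hold."""
    antecedent, consequent = rule
    return consequent if precondition == antecedent else None

def induce(observations):
    """Induction: infer the most probable rule from repeated paired
    observations; the result is plausible, not guaranteed."""
    counts = Counter(observations)
    (pair, _), = counts.most_common(1)
    return pair

rule = ("it rains", "the grass gets wet")
print(deduce(rule, "it rains"))     # -> the grass gets wet (guaranteed)

observed = [("it rains", "the grass gets wet")] * 5
print(induce(observed))             # -> the inferred rule, from observation
```

The asymmetry is the point: deduction needs an accepted rule in advance, while induction needs enough observations for the inferred rule to be statistically defensible.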
Since inductive reasoning attempts to derive theory from some number of observations, the validity of results relies on inferential statistics. According to Salkind,
“inferential statistics are used to infer something about the population from which the sample was drawn based on the characteristics of the sample.” (Salkind, 2012, p. 177)
Salkind wrote “the central limit theorem is in many ways the basis for inferential statistics” adding “the critical link between obtaining the results from the sample and
being able to generalize the results to the population is the assumption that repeated sampling from the population will result in a set of scores that are representative of the population.” (Salkind, 2012, p. 178) French wrote:
Scientific induction and statistical inference begins with the assumption that we have made enough progress to have some understanding of cause and effect, perhaps even a hypothesis or putative model. The methodology of science has focused much more on the testing and validation of models and theories and the estimation of parameters, that is, those processes that fall in the known and knowable spaces. Repeatability has come to lie at the heart of the scientific induction. Scientific models and theories can only be validated if they can be tested again and again in identical circumstances and shown to explain and predict system behaviours. (French, 2012)
The central limit theorem states that the sampling distribution of the mean of any independent, random variable will be normal or nearly normal if the sample size is large enough. How large is large enough? The answer depends on two factors: the required accuracy and the shape of the underlying population. The more closely the sampling distribution needs to resemble a normal distribution (i.e., the greater the required accuracy and confidence), the more sample points will be required. The more closely the original population resembles a normal distribution (i.e., its shape), the fewer sample points will be required. In practice, statisticians suggest a sample size of 30 to 40 when the population distribution is roughly bell-shaped; but if the original population is distinctly not normal (e.g., badly skewed, multi-peaked, and/or containing outliers), researchers prefer the sample size to be even larger.
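The sampling behavior described above can be demonstrated with a short simulation. The exponential population and sample sizes below are illustrative assumptions, not drawn from the dissertation’s data.

```python
# Short simulation of the central limit theorem described above: means of
# repeated samples from a skewed (exponential) population cluster around
# the population mean with standard error sigma / sqrt(n). The population
# choice and sample sizes are illustrative assumptions.

import math
import random
import statistics

random.seed(42)

n = 40          # a sample size in the suggested 30-to-40 range
trials = 2000   # number of repeated samples drawn from the population

# Exponential population: mean 1.0, standard deviation 1.0, badly skewed.
sample_means = [
    statistics.fmean(random.expovariate(1.0) for _ in range(n))
    for _ in range(trials)
]

observed_se = statistics.stdev(sample_means)
predicted_se = 1.0 / math.sqrt(n)   # CLT prediction: sigma / sqrt(n)

print(f"mean of sample means: {statistics.fmean(sample_means):.3f}")
print(f"observed SE: {observed_se:.3f} vs CLT prediction: {predicted_se:.3f}")
```

Despite the skewed population, the distribution of sample means is already close to normal at n = 40, and the observed standard error tracks the CLT prediction.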
The primary objection to the application of inductive reasoning here is the inability to define an attribute that measures system complexity in situ, complicated by the inability to obtain, even assuming a system complexity attribute is identified, a large enough set of independent and identically distributed samples from observation of non-static, non-repeatable program environments at unique points in time.
3.1.4 Plans for Validation
This section describes plans for validation of the general model SEMDAM. Section 4,
SEMDAM Applied to an Empirical Case Study, provides validation of SEMDAM via an empirical case study demonstrating its use. IEEE defines verification as the
“confirmation, through the provision of objective evidence, that specified requirements have been fulfilled” while defining validation as “confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled.” (ISO/IEC/IEEE 15288:2015(E), 2015, p. 10) Verification, using the methods of inspection, analysis, demonstration, and/or test, ensures that the system is made “right,” while validation, using comparative assessment, ensures that the “right” system is made.
According to the New World Encyclopaedia:
Abductive validation is the process of validating a given hypothesis through abductive reasoning. Under this principle, an explanation is valid if it is the best possible explanation of a set of known data. The best possible explanation is often defined in terms of simplicity and elegance (such as Occam's razor). (New World Encyclopedia, 2017)
Checkland wrote that the basis for acceptance of a scientific finding may range from the strong criterion of repeatability – “for findings to be accepted as part of the body of ‘scientific knowledge’ they have to be repeatable, time and again, by scientists other than those who first discovered them” – to the weak criterion of plausibility – “does this finding make a believable story?” (Checkland, 2000) To support the need for independent validation of this model, this paper incorporates advice from Checkland:
Research should be conducted in such a way that the whole process is subsequently recoverable by anyone interested in critically scrutinizing the research. This means declaring explicitly, at the start of the research, the intellectual frameworks and the process of using them which will be used to define what counts as knowledge in this piece of research. (Checkland, 2000)
The theoretical foundation of SEMDAM is based on the ontological foundation inherent within the Cynefin complexity framework. Dietz et al., wrote “ontological theories are theories about the nature of things. They address explanatory and/or predictive relationships in observed phenomena;” adding:
Ontological theories are valuated by their soundness and their appropriateness. The soundness of an ontological theory is established by its being rooted in sound philosophical theories. The appropriateness of an ontological theory is established by the evaluation of its practical application, e.g., through expert judgments. (Dietz, et al., 2013)
Sections 3.1.1, Research Context; 3.1.2, Methodological Formulation; and 3.1.3,
Alternatives Considered, provided detailed explanation of the underlying theories to support validation of the soundness of SEMDAM.
Validation of the appropriateness of SEMDAM requires comparative analysis of:
SEMs used, complexity framework used, and independent validation of the theory that
COSP impacts program execution.
Comparative analysis of the fit for purpose and fit for usage of SEMs is provided in Section 2, Literature Research. Comparative analysis of the appropriateness of SEMs used is graphically depicted in Figure 3-7, by Dr. Brian White, showing how practice drives theory in the years since 1950, and in Figure 3-8, by Drs. Gorod, Gandhi, White, Ireland, and Sauser, describing the relative difficulty of engineering various types of systems.
[Figure 3-7 plots Systems Engineering, System of Systems Engineering, Enterprise Systems Engineering, and Complex Systems Engineering against an axis of “Years Since 1950” (0 to 80), tracing each discipline’s progression from concept through practice to theory.]
Figure 3-7: White on How Practice Drives Theory in the Years Since 1950 (White B. E., On a Maturity Model for Complexity, Complex Systems, and Complex Systems Engineering, 2016)
Both representations, by individual or groups of recognized experts, depict the same four SEMs used in SEMDAM.
[Figure 3-8 depicts a nested hierarchy of System, System of Systems, Enterprise, and Complex System, with the degree of engineering difficulty increasing outward; the most general Complex System may be the most difficult type of system to engineer.]
Figure 3-8: Relative Difficulty of Engineering Various Types of Systems (Gorod, Gandhi, White, Ireland, & Sauser, 2015, p. 26)
Appropriateness of the Cynefin framework to determine COSP is described in
Sections 2.5, Cynefin sense-making Framework and 3.1.1, Research Context.
Alternative complexity frameworks considered are described in Section 3.1.3.
Independent validation that COSP impacts project execution is provided by Sheard who wrote, “projects with a lower value of complexity had better outcomes … than the higher complexity set of projects” (Sheard S. A., Assessing the Impact of Complexity
Attributes on System Development Project Outcomes, 2012, p. 108) and Roberts,
Mazzuchi and Sarkani who wrote “Results from 526 programs analyzed and among experts surveyed, suggest that current processes and/or cost estimates for the design and development of major weapons programs are less suited for complex systems” adding
“SoS programs have a significantly higher likelihood of overrunning cost than PLA or
SYS programs; and PLA programs have a higher likelihood of overrunning than SYS programs.” (Roberts, Mazzuchi, & Sarkani, 2016) In the Roberts study, COSP was ranked as “System, Platform, and SoS” increasing from System to SoS.
This research design is an observational study combining nonexperimental methods, including descriptive, correlational, and qualitative methods, rather than a design of experiments. Maier provided verification of the use of nonexperimental versus experimental research for systems engineering studies when he wrote:
It would be desirable to test the proposed heuristics in a broader way through detailed case study. As in most systems engineering studies, formal experiment is not really possible. We don’t build duplicate complex systems by different methods just to see what would happen. We can look retrospectively at built systems to test the applicability of heuristics, however. (Maier, Architecting Principles for Systems-of-Systems, 1998)
3.2 SEMDAM Introduction & Systematic Description
SEMDAM, based on the Cynefin sense-awareness framework, provides a recommendation for an appropriate SEM and/or a periodic reassessment of the appropriateness of an in situ SEM using a Diagnostic Assessment Model (DAM).
SEMDAM measures PM/SEM understanding of system complexity by evaluating statistically significant association of a priori prediction of system output or a posteriori perception of system response. The term diagnostic, rather than decision, is used because the model is based on abduction, or inference to the best explanation, a logical method of reasoning in which one chooses the hypothesis that would, if true, best explain the relevant evidence. The term assessment, rather than analysis, follows the distinction between the two: according to the Merriam-Webster dictionary, analysis is defined as “a detailed examination of anything complex in order to understand its nature or to determine its essential features” (Merriam-Webster, 2017) while assessment is defined as “the action or an instance of making a judgement about something.” (Merriam-Webster, 2017)
Figure 3-9 shows the process flowchart of SEMDAM. It incorporates the abductive logic model and addresses the initial test for Disorder; it solidifies the relationship to the Cynefin framework by highlighting the search for cause and effect, explicitly described in “Obtain Evidence for Cause & Effect Analysis”; and it shows that the expected output is a recommendation for a SEM appropriate to the COSP.
SEMDAM is intended for use by Program Management (PM) and/or SE Management
(PM/SEM) in service organizations, system development organizations, system
integrators (SI), or lead system integrators (LSI) that execute programs or projects that include the requirement for a system or software development lifecycle (S/SDLC) to deliver systems or services. SEMDAM supports two use cases:
(1) An aperiodic assessment tool for initial or one-time use; and,
(2) A periodic situational model to reassess appropriateness of an in situ SEM.
The following sections describe the SEMDAM methodology including specification of the COSP hypotheses using the formulation of abductive logic rules described in
Section 3.1.2.1, Abductive Reasoning above. Depending on results, there is potential for up to five abductive logic tests – each identified using “ABDX.”
This research is based on observational study and not design of experiments; therefore, SEMDAM’s statistical models look for correlation rather than causality.
SEMDAM is based on abductive reasoning, also referred to as diagnosis, which typically begins with an incomplete set of observations and proceeds to the likeliest possible explanation. Analogous to a medical diagnosis, SEMDAM does not guarantee selection of the “best” SE method. SEMDAM was influenced by Herbert Simon’s research in the 1960s into the science of administrative behavior and executive decision making and has adopted the goal of satisfactory rather than optimal. (Checkland, 2000) SEMDAM does not claim to recommend an optimal SEM; rather, SEMDAM provides a recommendation of a satisfactory SEM that is appropriate for the inferred COSP. Each of the tasks or hypothesis tests is described using an Input – Process – Output frame of reference and, given the potential for branching logic, includes a Next section with directions. Each task or hypothesis is introduced with an Overview section that provides reference to related sections of INCOSE’s SEH (2014 edition) or IEEE 15288:2015(E).
Figure 3-9: System Engineering Method Diagnostic Assessment Model (SEMDAM)
3.2.1 Step 1 – Gather Evidence of SE&M Activity
Overview – The objective of SEMDAM is to provide a recommendation for a complexity appropriate SEM based on COSP. Implicit in the name COSP is the fundamental requirement to obtain a clear and concise understanding of the System
Problem under evaluation or development. Shenhar and Sauser wrote that “no complex system can be created by a single person thus systems engineering is strongly linked to management. We therefore need to combine the two fields and talk about systems engineering management.” (Shenhar & Sauser, 2009, p. 117) Their definition of system engineering management as “the application of scientific, engineering, and managerial efforts” included the requirement to “Work with clients to ensure that the system created is qualified to address required needs and solve clients’ problems.” (Shenhar & Sauser,
2009, p. 119)
Input – External stakeholders, PM or SE&M leadership initiate an assessment.
Process – INCOSE wrote “Every man-made system has a life cycle, even if it is not formally defined” adding “A life cycle can be defined as the series of stages through which something (a system or manufactured product) passes.” (INCOSE SEH, 2015, p.
25) IEEE wrote “The purpose of the Life Cycle Model Management process is to define, maintain, and assure availability of policies, life cycle processes, life cycle models, and procedures for use by the organization.” (ISO/IEC/IEEE 15288:2015(E), 2015, p. 23)
Both INCOSE and IEEE expect the use of a life cycle model and associated artifacts.
This step searches for any artifacts of SE&M activity or external evidence of SE&M documentation including discussion of COSP if available.
Output – Evidence of SE&M activity, if found.
Next – Proceed to Section 3.2.2, Step 2 – Evaluate Hypothesis ABD1 & Interpret if
COSP is Disorder.
3.2.2 Step 2 – Evaluate Hypothesis ABD1 & Interpret if COSP is Disorder
Overview – As an initial test, SEMDAM seeks evidence of consideration, use or attempted use of systems engineering and management (SE&M) methodologies. Artifacts that would indicate that SE&M has been considered include identification or consideration of a SDLC, listing or consideration of requirements, evidence of analysis of alternatives or the existence of problem analysis artifacts such as models or simulations.
A previous assessment of COSP would itself indicate that SE&M has been considered, so previous assessments serve as evidence as well.
Input – Evidence of SE&M activity from Step 1, if found; and candidate explanations: Disorder, Known, Knowable, Complex, Chaos.
Process – Using the evidence provided (or the lack thereof) and the candidate explanations, evaluate hypothesis ABD1.
ABD1 (3): Given evidence (E => No evidence of system engineering and management {SE&M}) and candidate explanations Disorder, Known, Knowable, Complex, or Chaos of E, infer the truth that Disorder best explains E.
Output – If we fail to reject ABD1, infer that COSP is Disorder. Proceed to Step 10.
Next – If we reject ABD1, infer COSP is NOT Disorder. Assume COSP is Known and proceed to Step 3.
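The ABD1 evaluation can be sketched as a simple rule: every candidate explanation other than Disorder presupposes some evidence of SE&M activity, so the absence of artifacts makes Disorder the best explanation of the evidence. The function name and artifact strings below are hypothetical.

```python
# Hedged sketch of the Step 2 (ABD1) test: every candidate explanation
# other than Disorder presupposes some evidence of SE&M activity, so the
# absence of artifacts makes Disorder the best explanation of the
# evidence. The function name and artifact strings are illustrative only.

CANDIDATE_EXPLANATIONS = ["Disorder", "Known", "Knowable", "Complex", "Chaos"]

def evaluate_abd1(sem_artifacts):
    """Return the Step 2 inference for the initial Disorder test."""
    if not sem_artifacts:
        # Fail to reject ABD1: infer COSP is Disorder; proceed to Step 10.
        return "Disorder"
    # Reject ABD1: infer COSP is NOT Disorder; assume Known, go to Step 3.
    return "NOT Disorder"

print(evaluate_abd1([]))                                  # -> Disorder
print(evaluate_abd1(["SDLC plan", "requirements list"]))  # -> NOT Disorder
```

The sketch makes the branching explicit: an empty evidence set routes the assessment to Step 10, while any SE&M artifact routes it to Step 3 under the working assumption that COSP is Known.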
3.2.3 Step 3 – IV&V Program and SE Management
Overview – This task performs independent verification and validation (IV&V) of the
SE activities in IEEE 15288 paragraph 6.2.1, Life cycle model management process, and/or INCOSE paragraph 7.1, Life Cycle Model Management process. Since COSP is
not Disorder, this task assumes IEEE 15288 6.4.2, Stakeholder needs and requirements definition process, or a similar set of activities, has previously taken place and their artifacts are available for review and analysis. Shenhar and Sauser wrote “the SE efforts
… start with the identification of need … which must be matched with a technical feasibility of a system that will be capable of addressing this need.” (Shenhar & Sauser,
2009, p. 120) Gilbertson, Tanju and Eveleigh wrote “SEM ‘gets the big picture’ ensuring that the SE team observes the need, is properly oriented appropriately to focus on the opportunity, makes decisions when necessary, and spurs the SE team into action understanding expectations and constraints.” (Gilbertson, Tanju, & Eveleigh, 2017)
Input – Rejection of hypothesis ABD1. Previously identified Stakeholder Needs.
Process – This task independently verifies and validates that “stakeholder requirements are defined considering the context of the system-of-interest with the interoperating systems and enabling systems.” (ISO/IEC/IEEE 15288:2015(E), 2015, p.
51) If the stakeholder needs are not defined, this task would then identify the stakeholders and their needs from analysis of the environment, review of external documentation or other means. MITRE wrote “In the context of a systems engineering life cycle, an operational needs assessment forms the basis for defining requirements for a program and a system” adding that the assessment must “Determine the specific requirements of the
needs assessment process that apply” and “Identify specific stakeholders … including their responsibility, goals, and roles/relationships.” (MITRE, 2014, p. 281)
Output – Validated Stakeholder Need(s).
Next – Proceed to Step 4
3.2.4 Step 4 – IV&V Business or Mission Analysis
Overview – This task performs IV&V of the SE activities described in IEEE section
6.4.1, Business or Mission Analysis process, and/or INCOSE section 4.1, Business or
Mission Analysis process. IEEE wrote that the purpose of BMA is to, “define the business or mission problem or opportunity, characterize the solution space, and determine potential solution class(es) that could address a problem or take advantage of an opportunity” noting that “this process interacts with the organization’s strategy, which is generally outside the scope of 15288.” (ISO/IEC/IEEE 15288:2015(E), 2015, p. 48)
MITRE wrote that BMA should “Define the sponsor’s and customer’s problem or opportunity from a comprehensive, integrated perspective.” (MITRE, 2014, p. 16) IEEE wrote the BMA process “provides for the definition of the problem space and characterization of the solution space, including the relevant trade-space factors and preliminary life cycle concepts. This includes developing an understanding of the context and any key parameters, such as the critical quality characteristics.” (ISO/IEC/IEEE
15288:2015(E), 2015, p. 95)
Input – COSP is NOT Disorder & assumed Known. Validated Stakeholder Need(s).
Process – This task performs an independent verification and validation (IV&V) that the mission problem or opportunity is documented. If the system problem or opportunity is not defined, this task would then identify the system problems or opportunities from analysis
of the environment, review of external documentation or other means. Step 4 ensures that the noted problem or opportunity description has been updated to reflect any subsequent updates or refinements since program inception. INCOSE wrote:
Too often, the system definition is viewed as a linear, sequential, single pass through the processes. However, valuable information and insight need to be exchanged between the processes, in order to ensure a good system definition that effectively and efficiently meets the mission or business needs. The application of iteration and recursion to the life cycle processes with the appropriate feedback loops helps to ensure communication that accounts for ongoing learning and decisions. This facilitates the incorporation of learning from further analysis and process application as the technical solution evolves. (INCOSE SEH, 2015, p. 32)
Output – Validated Problem or Opportunity (i.e., system problem)
Next – Proceed to Step 5.
3.2.5 Step 5 – Obtain Evidence for Cause and Effect Analysis
Overview – The phrase “if all you have is a hammer, everything looks like a nail” is a criticism of using a familiar approach to solve all problems when it may be more appropriate to use a more difficult or less familiar one. IEEE wrote “the detail of the life cycle implementation within a project is dependent upon the complexity of the work, the methods used, and the skills and training of personnel involved in performing the work.”
(ISO/IEC/IEEE 15288:2015(E), 2015, pp. 24, 102)
This research focuses on measurement due in large part to the advice of Sage who wrote “success in implementation of systems engineering is critically dependent on the availability of appropriate measurements” adding, “management and measurement are irretrievably interconnected.” (Sage A. P., Systematic Measurements, 2009, p. 575) Sage wrote that, “one major need in all of systems engineering and systems management, at all levels, is to obtain the information and knowledge necessary to organize and direct” adding, “this information can only be obtained through an appropriate program of
systematic measurements and development of appropriate models for use in processing this information.” (Sage A. P., Systematic Measurements, 2009, p. 575)
Step 3 – IV&V Program and SE Management and Step 4 – IV&V Business or Mission Analysis ensure that this task has a clear vision of the intended system, avoiding what MITRE called “The most common problem about program performance cited in research … is that the program’s goals/objectives have not been identified” adding “It is impossible to develop measures of progress if we do not know where we are trying to go.” (MITRE, 2014, p. 76)
Step 5 is the core component of SEMDAM where the Cynefin sense-awareness framework is used to categorize the relationship between system input (cause) and system output (effect) by identifying attributes and appropriate statistical models for analysis.
SEMDAM provides a practical approach for recommending a complexity appropriate
SEM among competing alternatives without requiring complete knowledge. There is a substantial distinction between assumed COSP and inferred COSP – assumed COSP is where SEMDAM starts (after ruling out an inferred COSP of Disorder in Step 2) and inferred COSP is the basis for the final activity described in Section 3.2.10, Step 10 –
Recommend SEM based on Inferred COSP.
Input – Rejection of hypothesis ABD1, Validated Stakeholder Needs, Validated
Problem or Opportunity, and assumed COSP.
Process – Identify candidate attributes per Section 3.3, Attribute Selection Method. If
COSP is assumed to be Known, make a prediction for a candidate attribute and then gather data on the candidate attribute. Later, validate the accuracy of that prediction by measuring the statistical significance of the prediction using one of the statistical models
in Section 3.4, Statistical Model Selection. If COSP is assumed to be Knowable, make a prediction for a candidate attribute that requires expert knowledge or special investigation. Later, validate the accuracy of that prediction using one of the statistical models in Section 3.4, Statistical Model Selection. If COSP is assumed to be Complex, attempt to identify a historical trend for a candidate attribute and validate the accuracy of that perception using one of the statistical models in Section 3.4, Statistical Model
Selection. If COSP is assumed to be Chaos, validate previous rejections of ABD1, ABD2,
ABD3, and ABD4.
Output – Evidence and Candidate Explanation(s)
Next – The next step depends upon the assumed COSP. If assumed COSP is Known, proceed to Section 3.2.6. If assumed COSP is Knowable, proceed to Section 3.2.7. If assumed COSP is Complex, proceed to Section 3.2.8. If assumed COSP is Chaos, proceed to Section 3.2.9.
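The routing described above can be sketched in code. The following Python fragment is illustrative only; the function name and mapping structure are hypothetical and not part of SEMDAM itself, but the COSP names and step numbers come directly from the text.

```python
# Illustrative sketch of SEMDAM's Step 5 routing (Section 3.2.5).
# Each assumed COSP routes to the step that evaluates its hypothesis.

NEXT_STEP = {
    "Known":    6,   # Step 6 evaluates hypothesis ABD2 (Section 3.2.6)
    "Knowable": 7,   # Step 7 evaluates hypothesis ABD3 (Section 3.2.7)
    "Complex":  8,   # Step 8 evaluates hypothesis ABD4 (Section 3.2.8)
    "Chaos":    9,   # Step 9 evaluates hypothesis ABD5 (Section 3.2.9)
}

def next_step(assumed_cosp: str) -> int:
    """Return the SEMDAM step that evaluates the assumed COSP."""
    try:
        return NEXT_STEP[assumed_cosp]
    except KeyError:
        raise ValueError(f"Unknown assumed COSP: {assumed_cosp!r}")

print(next_step("Complex"))  # → 8
```
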
3.2.6 Step 6 – Evaluate Hypothesis ABD2 & Interpret if COSP is Known
Overview – As a second test, if necessary, SEMDAM looks for statistically significant evidence of an a priori prediction of cause and effect using the proposed attribute(s) (see Section 3.3, Attribute Selection) and a statistical method appropriate to the attributes (see Section 3.4, Statistical Model Selection). Since Disorder has been rejected, the list of candidate explanations has been updated.
Input – Statistically significant Evidence of a priori cause and effect from Step 5; and, Candidate Explanations: Known, Knowable, Complex or Chaos.
Process – Using the evidence provided and updated candidate explanations, evaluate hypothesis ABD2.
ABD2: Given evidence (E => Evidence of a priori cause and effect) and candidate explanations: Known, Knowable, Complex or Chaos of E, infer that Known best explains E. (4)
Output – If we fail to reject ABD2, infer that COSP is Known. Proceed to Step 10.
Next – If we reject ABD2, infer COSP is NOT Known. Assume COSP is Knowable and proceed to Section 3.2.5, Step 5 – Obtain Evidence for Cause and Effect Analysis.
3.2.7 Step 7 – Evaluate Hypothesis ABD3 & Interpret if COSP is Knowable
Overview – As a third test, if necessary, SEMDAM looks for statistically significant evidence of an a priori prediction of cause and effect requiring special investigation or expert knowledge using the proposed attribute(s) (see Section 3.3, Attribute Selection) and a statistical method appropriate to the attributes (see Section 3.4, Statistical Model Selection). Since Known has been rejected, the list of candidate explanations has been updated.
Input – Statistically significant Evidence of a priori cause and effect requiring special investigation or expert knowledge from Step 5; and, Candidate Explanations: Knowable,
Complex or Chaos.
Process – Using the evidence provided and updated candidate explanations, evaluate hypothesis ABD3.
ABD3: Given evidence (E => Evidence of a priori cause and effect that requires expert knowledge or special investigation) and candidate explanations: Knowable, Complex or Chaos of E, infer that Knowable best explains E. (5)
Output – If we fail to reject ABD3, infer that COSP is Knowable. Proceed to Step 10.
Next – If we reject ABD3, infer COSP is NOT Knowable. Assume COSP is Complex and proceed to Step 3.
3.2.8 Step 8 – Evaluate Hypothesis ABD4 & Interpret if COSP is Complex
Overview – As a fourth test, if necessary, SEMDAM looks for statistically significant evidence of an a posteriori perception of cause and effect using the proposed attribute(s) (see Section 3.3, Attribute Selection) and a statistical method appropriate to the attributes (see Section 3.4, Statistical Model Selection). Since Knowable has been rejected, the list of candidate explanations has been updated.
Input – Statistically significant Evidence of a posteriori cause and effect from Step 5; and, Candidate Explanations: Complex or Chaos.
Process – Using the evidence provided and updated candidate explanations, evaluate hypothesis ABD4.
ABD4: Given evidence (E => Evidence of a posteriori cause and effect) and candidate explanations: Complex or Chaos of E, infer that Complex best explains E. (6)
Output – If we fail to reject ABD4, infer that COSP is Complex. Proceed to Step 10.
Next – If we reject ABD4, infer COSP is NOT Complex. Assume COSP is Chaos and proceed to Section 3.2.5, Step 5 – Obtain Evidence for Cause and Effect Analysis.
3.2.9 Step 9 – Evaluate Hypothesis ABD5 & Interpret if COSP is Chaos
Overview – As a fifth and final test, if necessary, SEMDAM evaluates the consequent categorization of COSP as Chaos. Since Complex has been rejected, the list of candidate explanations has been updated.
Input – Absence of statistically significant evidence of a priori or a posteriori cause and effect from previous tests; and, Candidate Explanation: Chaos.
Process – Verify previous rejection of hypothesis tests ABD1, ABD2, ABD3, and
ABD4. Using the lack of evidence provided and updated candidate explanation, evaluate hypothesis ABD5.
ABD5: Given evidence (E => No evidence of a priori or a posteriori cause and effect) and candidate explanation: Chaos of E, infer that Chaos best explains E. (7)
Output – If we fail to reject ABD5, infer that COSP is Chaos. Because Chaos and Disorder are similar in that neither would provide evidence of a priori or a posteriori cause and effect, failure to reject ABD5 depends on the previous rejection of ABD1 verifying that COSP is not Disorder. Proceed to Section 3.2.10, Step 10 – Recommend SEM based on Inferred COSP.
Next – If we reject ABD5, infer COSP is NOT Chaos and realize that one of the previous hypothesis tests was incorrect. Assume COSP is Known and proceed to Section
3.2.5, Step 5 – Obtain Evidence for Cause and Effect Analysis.
3.2.10 Step 10 – Recommend SEM based on Inferred COSP
Overview – Applying the Cynefin sense-awareness framework with complexity domains associated with observation of cause and effect, SEMDAM provides a recommendation for a COSP appropriate SEM based on measuring statistically significant association for a priori prediction of system outcomes or a posteriori perception of system response.
Input – Inferred COSP.
Process – Using inferred COSP as input, look up the associated complexity appropriate SEM from Table 2-11.
Output – SEMDAM provides a recommendation of a complexity appropriate SEM based on the analysis of evidence gathered. SEMDAM is based on abduction, also referred to as inference to the best explanation, which is the style of reasoning commonly associated with medical doctors and detectives. Therefore, a complexity appropriate SEM is recommended subject to the understanding that “unlike deduction and induction, abduction can produce results that are incorrect within its formal system.” (New World Encyclopedia, 2017)
Next – If SEMDAM was conducted as an aperiodic assessment or periodic situational model to reassess appropriateness of an in situ SEM where the inferred COSP aligns with the in situ SEM, the SEMDAM artifacts, including SEM recommendation, attributes used, and tests conducted, should be retained for potential future re-use.
If SEMDAM was conducted as an initial assessment and the inferred COSP was previously unknown or use of a SEM was not considered (i.e., Disorder), refer to Section
2.2, Definition of System Used, and begin the process of selecting and tailoring a SEM by defining the problem space.
If, however, SEMDAM was conducted as a periodic situational model to reassess the appropriateness of an in situ SEM where the inferred COSP did not align with the in situ SEM, PM/SEM needs to reassess continued use of the in situ SEM to avoid system miscategorization and potentially system failure. Snowden and Boone wrote:
Good leadership requires openness to change on an individual level. Truly adept leaders will know not only how to identify the context they’re working in at any given time but also how to change their behavior and their decisions to match that context. They also prepare their organization to understand the different contexts and the conditions for transition between them.
In the complex environment of the current business world, leaders often will be called upon to act against their instincts. They will need to know when to share power and when to wield it alone, when to look to the wisdom of the group, and when to take their own counsel. A deep understanding of context, the ability to embrace complexity and paradox, and a willingness to flexibly change leadership style will be required for leaders who want to make things happen in a time of increasing uncertainty. (Snowden & Boone, A Leader's Framework for Decision Making, 2007)
Selection of a COSP appropriate SEM when the inferred COSP did not align with the in situ SEM, does not automatically imply that a SEM capable of addressing more complexity is required. The activity of selecting attributes and analysis models is designed to increase knowledge of the COSP and the actual problem space. Not unlike a medical diagnosis, SEMDAM may first exclude several bad options before narrowing down the recommendation.
3.3 Attribute Selection Method
This section describes the requirements that SEMDAM places on attributes: valuable, nontrivial, and measurable; describes attribute data types: categorical and continuous; and describes attribute selection for both Design of Experiments (DoE) and observational studies.
3.3.1 Valuable, Nontrivial, and Measurable Data
SEMDAM is dependent on identification of attributes that may be used to predict or perceive patterns in system output based on valuable, nontrivial, and measurable data.
Regarding measurable data, Roedler and Jones wrote: “There are three key measurement concepts that form the basic building blocks for successful measurement application. They are:
Measurement is a consistent but flexible process that must be tailored to the unique information needs and characteristics of a particular project or organization. These information needs usually change during the life cycle as the environment changes, milestones are accomplished, performance parameters are achieved, risks are treated, etc. Changing information needs drive changes to the measures.
Decision makers must understand what is being measured. Key decision makers, including both technical and business managers, must be able to connect “What is Being Measured” to “What they need to know.” Measurement must deliver value- added objective results that can be trusted on the day-to-day issues that these managers face.
Measurement must be used to be effective. The measurement program must play a role in helping decision makers understand project and organization issues and to evaluate and make key trade-offs to optimize overall performance.
These three basic measurement concepts appear to be common sense, but are often ignored. They need to be ingrained in the project and organization to effectively apply measurement. (Roedler & Jones, 2005, p. 22)
3.3.2 Attribute Data Types
The selection of attributes for analysis impacts the type of statistical modelling available. Selecting attributes requires analysis of both inputs (X) and outputs (Y). Inputs and outputs may be either categorical or continuous. Gygi, Covey, DeCarlo, and
Williams described the two data types – categorical and continuous – with descriptions and examples shown in Table 3-3. The type of data impacts the selection of appropriate statistical analysis methods, which are described in Section 3.4.
Table 3-3: Characterization of Data Types

Data Type | Description | Examples
Category | Data observations fall into discrete, named value categories. No mathematical operations can be performed on the raw data. You can count the number of occurrences you see of each category. | Eye color: brown, blue, green; Location: Factory 1, Factory 2, Factory 3; Inspection results: pass, fail; Size: large, medium, small; Fit check: go, no-go; Questionnaire response: yes, no; Attendance: present, absent; Employee: Fred, Suzanne, Holly; Processing: Treatment A, Treatment B
Continuous | Data observations can take on numerical value and aren’t confined to nominal categories. | Bank account balance: dollars; Length: meters; Time: seconds; Electric current: amps

(Gygi, Covey, DeCarlo, & Williams, 2012, p. 114)
3.3.3 Identification of Key Decision Attributes by Design of Experiment
An experiment purposely sets and controls input values by control and/or modification of the process or system being studied. Because variables are controlled in the design of experiments (DoE) and test runs can be randomly ordered, statistically significant results can lay claim to causation. Montgomery wrote:
In general, experiments are used to study the performance of processes and systems. The process or system can be represented by a general model of a process or system, Figure 3-10. We can usually visualize the process as a combination of operations, machines, methods, people, and other resources that transforms some input (often a material) into an output that has one or more observable response variables. Some of
the process variables and material properties x1 through xp are controllable, whereas other variables z1 through zq are uncontrollable. (Montgomery, 2013, p. 3)
Figure 3-10: General model of a process or system (Montgomery, 2013, p. 3)
If DoE is possible, there are multiple DoE strategies available to identify significant attributes:
• Best-Guess Approach – This approach, which is frequently used in practice by scientists and engineers, involves selecting an arbitrary combination of factors and seeing what happens. Its disadvantages are that it could take a long time or be impossible to schedule, and that there is no guarantee that the best solution has been achieved or ever will be achieved;
• One-Factor-at-a-Time (OFAT) Approach – This approach involves selecting a starting point, or baseline set of levels, for each factor, and then successively varying each factor over its range with the other factors held constant at the baseline level. The main disadvantage of this approach is that it fails to consider any possible interaction between the factors; and,
• Factorial or Fractional Factorial Approach – In this approach, factors are varied together instead of one at a time, which makes the most efficient use of the experiment data.
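The factorial approach above can be sketched with a short example: every combination of factor levels is enumerated as a Cartesian product, and the run order is randomized so that statistically significant results can support causal claims. The factor names and levels are invented for illustration.

```python
# Minimal full-factorial design sketch (hypothetical factors and levels).
import itertools
import random

factors = {
    "temperature": [150, 200],   # two levels per factor -> 2^3 = 8 runs
    "pressure":    [10, 20],
    "catalyst":    ["A", "B"],
}

# Full factorial: the Cartesian product of all factor levels.
runs = [dict(zip(factors, levels))
        for levels in itertools.product(*factors.values())]

random.shuffle(runs)             # randomize run order before executing
print(len(runs))                 # → 8
```

A fractional factorial design would run only a chosen subset of these combinations, trading some interaction information for fewer runs.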
3.3.4 Identification of Key Decision Attributes by Observational Study
An observational study measures outputs of interest, which can be response or dependent variables, when it is impossible or impractical to control, manipulate, or influence the system under study, including its inputs or other factors. The observer acts as an outsider, recording data and/or events as they happen, or happened, in order to gain understanding from careful review of the environment. Finding statistically significant results during an observational study allows a claim of correlation and association, not causation, since observational studies do not or cannot control any variables.
Bharathy and McShane wrote “a causal link is ascribed between two variables when the modeler believes that what happens in an independent variable was cause for some consequence in a dependent variable.” They wrote, “Systems dynamics is a causal modelling approach that evolved out of work on feedback control systems owing to systems dynamics’ ability to handle complex inter-relationships, nonlinearity, and feedback loop structures and time delays” adding, “a main tenet of causal modelling is that modelling each component individually and aggregating the components is not enough to determine the behaviour of a system.” (Bharathy & McShane, 2014)
Gygi, Covey, DeCarlo, and Williams wrote “reduction of a large collection of potential factors down to a smaller area of focus is called data mining” adding “be certain choices are guided by the data rather than by opinion or guesses.” (Gygi, Covey,
DeCarlo, & Williams, 2012, p. 209) Kantardzic wrote:
The need to understand large, complex, information-rich data sets is common to virtually all fields of business, science, and engineering. Data mining is an iterative process within which progress is defined by discovery, through either automatic or manual methods. Data mining is most useful in an exploratory analysis scenario in which there are no predetermined notions about what will constitute an “interesting” outcome. Data mining is the search for new, valuable, and nontrivial information in large volumes of data. It is a cooperative effort of humans and computers.
In practice, the two primary goals of data mining tend to be prediction and description. Prediction involves using some variables or fields in the data set to predict unknown or future values of other variables of interest. Description, on the other hand, focuses on finding patterns describing the data that can be interpreted by humans. System identification is not a one-pass process: both structure and parameter identification need to be done repeatedly until a satisfactory model is found. (Kantardzic, 2011, p. 2)
Gygi, Covey, DeCarlo, and Williams describe a progressive and iterative approach to identify critical system attributes that begins with observational studies and leads to DoE, shown in Figure 3-11. Regarding screening, characterization, and optimization experiments, Gygi, Covey, DeCarlo, and Williams wrote:
Screening experiments: The whole point of this stage is to quickly verify which factors have a significant effect on the output. When you start investigating a process or system, your experiments are designed to handle a large number of factors or variables because you identify all the possible Xs that may be influencing the output Y. But not all of those inputs affect the output, so screen them out.
Characterizing experiments: When you’ve screened out the unimportant variables, your experiments focus on characterizing and quantifying the effect of the remaining critical few inputs. These characterization experiments reveal what form and what magnitude the critical factors take in the Y = f(X) + ε equation for your process or system.
Optimization experiments: After characterizing your process or system, the final step is to conduct optimization experiments to find the best settings of the Xs to meet your Y goal. Your goal may be to maximize or minimize the value of the output or to hit a certain target level. More often, your goal is simply to minimize the amount of variation in the output Y. (Gygi, Covey, DeCarlo, & Williams, 2012, p. 265)
Figure 3-11: SEMDAM Attribute Selection is Based on Six Sigma’s Progressive and Iterative Approach to Identify Critical Outcomes (Gygi, Covey, DeCarlo, & Williams, 2012, p. 265) {Observational Study & DoE added}
3.3.5 Describing Trends in Attribute Data
SEMDAM is structured to work across a broad range of COSPs, including Complex or Chaos, where theory posits that it is not possible to optimize system response. Ackoff wrote of a useful concept to consider when optimization is not possible:
‘Satisficing’ is a remarkably useful term that was coined by Herbert A. Simon to designate efforts to attain some level of satisfaction, but not necessarily to exceed it. To satisfice is to do ‘well enough’, but not necessarily ‘as well as possible’. The level of attainment that defines ‘satisfaction’ is one that the decision maker is willing to settle for.
The satisficing approach to planning is usually defended with the hard-to-refute argument that it is better to produce a feasible plan that is not optimal than an optimal plan that is not feasible. (Ackoff, 1970)
3.4 Statistical Model Selection
The purpose of this section is to describe the statistical models used to evaluate the attributes selected above, verifying the prediction or perception as statistically significant using the statistical methods shown in Figure 3-12.
Figure 3-12: Choosing an Analysis: Attribution Selection Impacts Statistical Method Selection (Minitab, 2010)
3.4.1 Linear or Logistic Regression
In statistical modeling, regression analysis is a statistical process for estimating the relationships among variables. It includes many techniques for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables (‘predictors’ or ‘factors’). Regression analysis helps one understand how the typical value of the dependent variable (or 'criterion variable') changes when any one of the independent variables is varied, while the other independent variables are held fixed. Simple linear regression is a statistical method that allows us to summarize and study relationships between two continuous (quantitative) variables:
• One variable, denoted x, is regarded as the predictor, explanatory, or independent variable; and,
• The other variable, denoted y, is regarded as the response, outcome, or dependent variable.
In a simple linear regression model, a single response measurement Y is related to a single predictor (covariate, regressor) X for each observation. The critical assumption of the model is that the conditional mean function is linear:
E(Y |X) = α + βX (8)
Simple linear regression gets its adjective "simple," because it concerns the study of only one predictor variable. In most problems, more than one predictor variable will be available. This leads to the following “multiple regression” mean function:
E(Y |X) = α + β1X1 + · · · + βpXp (9)
where α is called the intercept and the βj are called slopes or coefficients.
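Equation (8) can be made concrete with a minimal worked example. The sketch below fits the simple (one-predictor) case by ordinary least squares using the standard closed-form estimates; the data are invented so that the fit recovers the true intercept and slope exactly.

```python
# Fit the simple linear regression mean function E(Y|X) = alpha + beta*X
# by ordinary least squares (standard library only).
from statistics import mean

def fit_simple_ols(xs, ys):
    """Return (alpha, beta) minimizing the sum of squared residuals."""
    x_bar, y_bar = mean(xs), mean(ys)
    beta = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
            / sum((x - x_bar) ** 2 for x in xs))
    alpha = y_bar - beta * x_bar          # intercept passes through means
    return alpha, beta

# Data generated from y = 1 + 2x exactly, so the fit recovers it.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
alpha, beta = fit_simple_ols(xs, ys)
print(alpha, beta)  # → 1.0 2.0
```

The multiple regression case of equation (9) follows the same least-squares principle with one coefficient per predictor.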
3.4.2 Analysis of Variance (ANOVA)
Analysis of Variance (ANOVA) is a collection of statistical models used to analyze the differences among group means and their associated procedures (such as "variation" among and between groups). The analysis of variance may be used as an exploratory tool to explain observations. ANOVA provides a statistical test of whether or not the means of several groups are equal, and therefore generalizes the t-test to more than two groups.
ANOVAs are useful for comparing (testing) three or more means (groups or variables) for statistical significance. It is conceptually similar to multiple two-sample t-tests.
Nordstokke and Zumbo wrote “Tests of the equality of variances are sometimes used on their own to compare variability across groups of experimental or non-experimental conditions but they are most often used alongside other methods to support assumptions made about variances.” (Nordstokke & Zumbo, 2010)
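The one-way ANOVA F statistic described above can be sketched as the ratio of between-group to within-group mean squares. The sample data below are invented so the arithmetic works out cleanly.

```python
# Minimal one-way ANOVA sketch (standard library only): the F statistic
# compares variation between group means to variation within groups,
# generalizing the t-test to three or more groups.
from statistics import mean

def one_way_anova_f(groups):
    """Return the one-way ANOVA F statistic for a list of samples."""
    all_obs = [x for g in groups for x in g]
    grand_mean = mean(all_obs)
    # Between-group sum of squares (degrees of freedom: k - 1)
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (degrees of freedom: N - k)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_obs) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

f_stat = one_way_anova_f([[1, 2, 3], [2, 3, 4], [6, 7, 8]])
print(f_stat)  # → 21.0
```

A large F statistic, compared against the F distribution with the stated degrees of freedom, indicates that at least one group mean differs from the others.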
3.4.3 Two-Sample t-Test
A two-sample t-test determines if the population means for two independent groups differ significantly or if the difference is due instead to random chance. Minitab wrote:
Use a two-sample t-test with continuous data from two independent random samples. Samples are independent if observations from one sample are not related to the observations from the other sample. The test also assumes that the data come from normally distributed populations. However, it is fairly robust to violations of this assumption when the size of both samples is 30 or more. (Minitab, 2010)
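The pooled two-sample t statistic can be sketched as follows, assuming equal variances; the sample data are invented for illustration.

```python
# Pooled two-sample t statistic (standard library only). A large |t|
# suggests the two population means differ beyond random chance.
from statistics import mean, variance
from math import sqrt

def two_sample_t(a, b):
    """Pooled two-sample t statistic (assumes equal population variances)."""
    n1, n2 = len(a), len(b)
    # Pooled sample variance weights each group's variance by its df.
    sp2 = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / n1 + 1 / n2))

t = two_sample_t([1, 2, 3, 4, 5], [3, 4, 5, 6, 7])
print(round(t, 6))  # → -2.0
```

In practice the statistic would be compared against the t distribution with n1 + n2 − 2 degrees of freedom to obtain a p-value.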
3.4.4 Data Qualification Methods
The following data qualification analysis steps – test for normality, Johnson Transformation, Grubbs’ test for outliers, test for equal variances, and/or Chi-Squared test for independence – may be used to provide a statistically significant basis for rejecting or failing to reject hypotheses.
3.4.4.1 Test for Normal Distribution
The Anderson-Darling test for normality is based on the following null and alternate hypotheses:
H(N): Data follows a normal distribution
H(N)A: Data does not follow a normal distribution
The Anderson-Darling test statistic, A², is defined as:
A² = −n − S (10)
Where: