
The Application of Systems Engineering to Software Development: A Case Study

Robert L. Sweeney, Jeffrey P. Hamman, and Steven M. Biemer

The U.S. Navy's Assessment Division (N81) integrates and prioritizes warfighting capability within resource constraints by using a joint campaign model to represent "what it takes to win" in the complex arena of multiservice regional conflict. N81 commissioned an assessment in the spring of 2006 to determine the feasibility and affordability of adding a maritime capability to the Synthetic Theater Operations Research Model (STORM) to make it an acceptable campaign model for the U.S. Navy staff. The result of this assessment was a partnership between the U.S. Navy's N81 and the U.S. Air Force Air Staff's Studies and Analyses Directorate (A9), under the name "STORM+." Replacing a legacy campaign model has broad impact on future investment decisions and can attract a wide range of stakeholders with an even wider range of requirements. This article describes how an APL team partnered with both the sponsor and the developer to implement systems engineering concepts to ensure a successful replacement of the existing deterministic U.S. Navy campaign model with a stochastic model created by adding a maritime warfare capability to STORM. The results indicate systems engineering can be successfully applied to a large, complex effort as long as the cultures of both the sponsor and the developer are appreciated and accommodated.

THE PURPOSE OF STORM+

The landscape of model development is littered with the abandoned remains of needed tools. They generally lie discarded, rather than in use, often because they could not achieve a useful set of capabilities constrained sufficiently to maintain a manageable data input load and an acceptably short run time. Such an outcome is the result of too many stakeholders with disparate analysis problems and varying granularity being unwilling to compromise for a greater "good enough." They hold out for a sum of all requirements, which often results in a terminated program that fails to meet any requirements at all. Systems engineering provides a disciplined, structured process to keep the effort focused on this greater "good enough"—the minimal set of requirements necessary and sufficient to meet a well-defined, feasible need.1–4 This is the story of the challenges in getting participants to accept systems engineering even when the sponsor has decreed its use. The story describes a collaboration of different professional cultures that share the community of military modeling and simulation (M&S) but, like most good stories, leads to a happy ending that more than vindicates the trials and tribulations, not to mention the investment of resources.

The sponsor for the STORM+ project was the U.S. Navy's Assessment Division in the Office of the Chief of Naval Operations (OPNAV), known by their office code as N81. In the world of U.S. Navy resource requirements, N81 is OPNAV's "honest broker" and integrator. It produces capability analyses for both warfighting and warfighting support, integrating and prioritizing capabilities within resource constraints and balancing inputs from strong constituencies to recommend a broad, affordable program to ensure that the U.S. Navy can meet its role as defined by the National Military Strategy.5

N81 analyses take advantage of M&S throughout the modeling hierarchy pyramid depicted in Fig. 1. The ultimate determination is how the program contributes to the joint (all armed services) campaign arena, evaluating "what it takes to win" and the "so what?" of any capability analysis or analysis of alternatives. How a system or concept performs in a campaign with the scope of a Desert Storm or Iraqi Freedom, with multiple, competing missions and capabilities from all of the U.S. armed services and against the most capable projected adversary, is often the final discriminator for senior decision makers.

Figure 1. Modeling pyramid, showing the location of the U.S. Air Force's STORM and the U.S. Navy's ITEM in the modeling hierarchy.

Campaign models must be capable of joint force structure analysis, strategy assessments, and operational planning while providing metrics useful to decision makers. Given the range of military activities that must be modeled, the diversity of mathematical models contained within, and the interrelated trade-offs of attributes such as run speed, granularity, and ease of use, there can be little doubt that a campaign model is a complex system.

Since the early 1990s, N81's campaign model has been the Integrated Theater Engagement Model (ITEM), consisting of air, land, and naval warfare modules that permit realistic representations of capabilities from all armed services in a common computer environment by using a deterministic method to represent uncertainty in outcomes. This deterministic approach can allow greater fidelity of object attributes than a stochastic approach, which requires a large number of model runs and therefore longer run times to generate a distribution of possible outcomes. With the rapid increase of computing power and speed over the last 10 years, run time for stochastic models has become less of a consideration. Stochastic models are gaining broad acceptance for their ability to provide the analyst a solution space rather than a point solution based on an assumed probability. Identifying an area of uncertainty around outcomes increases the credibility of analysis and gives decision makers more context with which to make a decision.
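To make the contrast concrete, the following toy sketch (unrelated to STORM's or ITEM's actual algorithms; the engagement, probabilities, and replication count are invented for illustration) shows how a deterministic expected-value calculation yields a single point solution, whereas replicating the same stochastic engagement many times yields a distribution of outcomes that can be reported as a solution space.

```cpp
#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

// Toy engagement: 10 shots, each with a 0.7 probability of a kill.
// Deterministic treatment: report the expected value as a point solution.
// Stochastic treatment: replicate the engagement and report a spread.
int main() {
    const int shots = 10;
    const double pKill = 0.7;

    std::cout << "Deterministic point solution: " << shots * pKill << " kills\n";

    std::mt19937 rng(42);
    std::bernoulli_distribution kill(pKill);
    std::vector<int> outcomes;
    for (int rep = 0; rep < 1000; ++rep) {
        int kills = 0;
        for (int s = 0; s < shots; ++s) kills += kill(rng) ? 1 : 0;
        outcomes.push_back(kills);
    }
    std::sort(outcomes.begin(), outcomes.end());
    std::cout << "Stochastic solution space (1000 replications): "
              << "5th percentile = " << outcomes[50]
              << ", median = " << outcomes[500]
              << ", 95th percentile = " << outcomes[950] << " kills\n";
    return 0;
}
```

The price of the distribution is the run time of the replications, which is exactly the trade-off described above.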
When N81 began looking for a stochastic model to replace ITEM, the U.S. Air Force's Synthetic Theater Operations Research Model (STORM) was an attractive choice. First implemented in 2004, STORM is a stochastic, discrete-event, data-driven simulation written in the C++ programming language. STORM is based on an architecture called the Common Analytic Simulation Architecture (CASA), which is designed to reduce development time and life-cycle costs for analytic simulations while minimizing dependencies between software modules. STORM has an active and ongoing development effort managed by the U.S. Air Force Air Staff's Studies and Analyses Directorate (A9) as well as a growing users' community that includes other U.S. services, the Office of the Secretary of Defense, the staff of the Chairman of the Joint Chiefs of Staff, and international participation. It brings a variety of highly desirable attributes, such as an open architecture, the use of industry standards, low program and ownership costs, and high adaptability because of its data-driven format.

N81 commissioned an assessment in the spring of 2006 to determine the feasibility and affordability of adding a maritime capability to STORM to make it an acceptable campaign model for the U.S. Navy staff. The result of this assessment was a partnership between N81 and A9 under the project name "STORM+." Tapping into the convergence of a campaign model that had been embraced by the U.S. Air Force, was used within the Office of the Secretary of Defense, and had captured the interest of the U.S. Marine Corps would bring broad understanding and credibility to future U.S. Navy studies.


The development of a complex system requires systems engineering throughout its life cycle, from requirements generation through functional definition, development, integration and testing, deployment, and operations and support. Each user (modeler) and stakeholder (user of the model's output) is an advocate for certain model capabilities and attributes. Some capabilities and attributes are true requirements to achieve the overall objective, but others are only "nice-to-haves" that, although desirable to some or even all of the user/stakeholder community, bring unnecessary risk to performance, cost, and schedule.

The U.S. Navy recognized that systems engineering could guide the development process, keeping it headed efficiently toward the objective while coordinating the various disciplines represented by the stakeholders, including M&S code writers, analysts, and decision makers. Inherent to systems engineering also would be a testing plan to verify that requirements were met and, most importantly, to continually inform the sponsor on the level of risk and to recommend courses of action when the risk became too high. Embarking on a multiyear, multimillion-dollar model-development effort to add maritime capability to STORM without a systems engineer to monitor, assess, and report would have invited, at a minimum, schedule delays and cost overruns and ultimately could have led to a final product that did not meet the need originally identified as the reason for replacing the legacy model.

SYSTEMS ENGINEERING PLAN

Systems engineering is built on the principle of maintaining a total system perspective throughout the development of a complex system, resolving design decisions by using the highest context available. This principle requires the systems engineer to continually focus on five activities throughout the development life cycle:

1. Formulating and refining operational, functional, and performance requirements

2. Identifying and decomposing the system's functionality

3. Implementing that functionality into a feasible, useful product

4. Verifying the system's requirements, functionality, and implementation

5. Managing inherent operational, technical, and programmatic risk

Whether in the first or the final stage of development, the performance of these five activities drives design decisions and leads to a structured approach. However, applying a structured approach to software development has always presented a challenge. Creativity, innovation, and rapid response are hallmarks of modern software development; this was the case with the STORM+ program.

The STORM+ developers used a form of agile software development, an iterative life cycle model that quickly produces prototypes that the user and developer can evaluate to refine requirements and design. It is especially well suited to small- to medium-sized projects for which requirements are not firmly defined and where the sponsor is willing to work closely with the developer to achieve a successful product. The agile methodology depends on this close sponsor–developer engagement. As defined by its proponents, the agile methodology is based on the following postulates:

• Requirements (in many projects) are not wholly predictable and will change during the development period. A corollary is that sponsor priorities are likely to change during the same period.

• Design and construction should be integrated because the validity of the design can seldom be judged before the implementation is tested.

• Analysis, design, construction, and testing are not predictable and cannot be planned with adequate levels of precision.

Agile development relies heavily on the software development team conducting simultaneous activities. Formal analysis and design are not separate steps; they are incorporated in the coding and testing of software. In this approach, quality and robustness are evolved attributes of the product. Thus, the iterations are to be built upon, rather than thrown away (see chapters 20–24 in Ref. 4).

Although agile development works well, it is difficult to reconcile with traditional systems engineering methods. Furthermore, the STORM development history has used a series of spiral releases, scheduled approximately every 6 months. It was important that introducing systems engineering into the development of a maritime capability not break this cycle.

To conform to this 6-month periodicity, the STORM+ development effort was initially divided into a set of separate 6-month periods, with five spiral releases (numbered 1 through 5) that culminated in a formal release of STORM v2.0. Each spiral would include additional functionality over the previous one; however, the level of functionality would not be linear. In fact, the first spiral would not involve a software release at all but rather a set of design documents focusing on the model infrastructure necessary to implement maritime capability. The first four spiral releases would be used to measure and evaluate the progress against the maritime requirements. Before the spiral development, however, two early phases would be completed: requirements analysis and conceptual design. Figure 2 depicts this schedule.


Figure 2. The schedule for the STORM+ development effort, running from July 2006 through July 2010: feasibility analysis, requirements analysis, conceptual design, and Spirals 1 through 5, with releases of STORM v1.6, the STORM+ Navy Interim version, STORM v1.7, the STORM+ Navy Alpha version, and STORM v2.0.

The version of STORM available to users at the beginning of this effort was v1.6. Two additional versions would become available to general users: v1.7, having limited maritime capability, and v2.0, having full maritime capability.

The plan that evolved had to support the agile development used by the developer while maintaining the critical aspects of systems engineering. Figure 3 depicts the process that integrates the two. Overlaid in maroon on Fig. 3 are the names of the eight integrated product teams (IPTs) listed next to the activities for which they were responsible. As indicated, the Management, Systems Engineering, Risk Management, and Data Integrity IPTs had responsibilities throughout the process. Although the IPTs had distinct, clearly defined responsibilities, their membership was drawn from all participating organizations, with some people serving on more than one IPT. This mixed membership, along with biweekly teleconferences that included representation from all IPTs, kept "stovepipes" from forming that could lead to inefficiencies, miscommunications, and other problems.

Figure 3. The STORM+ systems engineering process, which incorporated agile development while maintaining critical aspects of systems engineering. Names of the eight IPTs are shown in maroon next to the activities for which they were responsible.

Requirements Analysis
Requirements analysis is a critical component of systems engineering and was at the heart of the STORM+ development effort. The requirements focused on what naval assets (along with their attributes) and what processes would be in STORM+. Initially identifying requirements was not the challenging activity, because they came from the capabilities of the current ITEM. The challenge was scoping this initial set of ITEM capabilities into a manageable and consistent set of requirements. The goal was to establish a stable set of requirements early. Constrained by an ambitious schedule and a budget, the requirements were not allowed to creep beyond the goal of "ITEM-like" capabilities, but they did evolve to enhance clarity.

Functional Analysis
Although the requirements define the assets and processes (the "what") in the model, they do not define how they are implemented (the "how"). Functional analysis focused on the specifics of the naval asset interactions, processes, and information architecture necessary to implement the requirements.

Design, Development, and Unit Testing
The developer followed their agile approach for design and development. Before each spiral development effort, a general road map was published for review and feedback. Once the road maps were understood and agreed upon, a series of module design documents were developed as the design and development progressed. These documents described the general design for each portion of the model addressed in the spiral. Finally, code was engineered, followed by unit testing on each module, which was performed by the developer.


Concurrent Verification and Testing
Concurrent verification and testing (CV&T) involved the planning, execution, and reporting of the STORM+ spiral testing. Activities that were performed included (i) verifying the mapping of requirements to conceptual model to development products; (ii) assessing developer unit testing; (iii) verifying and testing representation, initialization data, and hardware/software integration; and (iv) documenting and reporting activities.

As the lead systems engineer, APL was responsible for ensuring that this process was followed as well as identifying and mitigating risks throughout the program. Accomplishing this responsibility meant significant involvement in three of the four primary activities depicted in Fig. 3—requirements analysis, functional analysis, and CV&T—in addition to risk management. The next section describes the actions performed within these three activities and their outcomes that led to the successful release of STORM v2.0. At the end of the section is a discussion of risk management.

SYSTEMS ENGINEERING CONTRIBUTIONS

Requirements Development and Analysis
The STORM+ maritime requirements development process began during an initial assessment of the ability of the U.S. Air Force's STORM to support OPNAV campaign analysis. The team of subject-matter experts (SMEs) that N81 convened in May 2006 identified broad maritime capabilities that would need to be added to STORM to achieve an analysis capability comparable to ITEM. "ITEM-like" capabilities would be a combination of those demonstrated in STORM plus those implemented in a STORM modification.

The Requirements IPT was responsible for identifying, analyzing, and articulating the model requirements. They surveyed ITEM users and campaign analysts for model requirements necessary to provide maritime capabilities in STORM+. Initially, more than 871 user needs were identified and submitted for requirements analysis. Proposed requirements went through several systems engineering process steps (depicted in Fig. 4) before being accepted as valid STORM+ requirements. First, user needs were categorized into maritime capability categories (e.g., anti-submarine warfare, sea-based air and missile defense, etc.). Second, a requirements review was completed that identified and deleted duplicate and nonspecific user needs. And third, each requirement was assessed to determine whether it was represented in ITEM and whether it could be represented in STORM.

Figure 4. The requirements analysis process.

If a requirement was represented in ITEM, it was designated as "In-ITEM." If it was not, the requirement was designated as "Out-ITEM." If the requirement was already represented in STORM, even if it was not completely met, it was designated as "In-STORM." Finally, if the requirement was not represented in STORM, it was designated as "Out-STORM." Thus, the requirements were divided into four categories:

1. In-ITEM/In-STORM
2. In-ITEM/Out-STORM
3. Out-ITEM/In-STORM
4. Out-ITEM/Out-STORM

These categories were used to prioritize the STORM+ requirements. Table 1 shows the resulting matrix for the STORM+ requirements and their priorities.

Table 1. Matrix of STORM+ requirements and their priorities.

ITEM Capabilities    STORM Capabilities: In-STORM        STORM Capabilities: Out-STORM
In-ITEM              STORM+ requirement: priority 1      STORM+ requirement: priority 1
Out-ITEM             STORM+ requirement: priority 2      Not a STORM+ requirement; deferred
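The mapping in Table 1 amounts to a simple classification rule on two flags. The sketch below is an illustration only; the record layout and requirement identifiers are hypothetical and not part of the actual STORM+ tooling.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Illustrative requirement record; names are hypothetical, not from the STORM+ project.
struct Requirement {
    std::string id;
    bool inItem;   // capability represented in ITEM
    bool inStorm;  // capability already represented (at least partially) in STORM
};

// Priority assignment following Table 1:
//   In-ITEM  (either STORM column)  -> priority 1
//   Out-ITEM / In-STORM             -> priority 2
//   Out-ITEM / Out-STORM            -> not a STORM+ requirement (deferred)
enum class Priority { One, Two, Deferred };

Priority classify(const Requirement& r) {
    if (r.inItem)  return Priority::One;
    if (r.inStorm) return Priority::Two;
    return Priority::Deferred;
}

int main() {
    std::vector<Requirement> reqs = {
        {"ASW-012", true,  false},   // In-ITEM / Out-STORM  -> priority 1 (unique development)
        {"AAW-003", true,  true},    // In-ITEM / In-STORM   -> priority 1 (verify existing capability)
        {"MIW-027", false, true},    // Out-ITEM / In-STORM  -> priority 2
        {"LOG-040", false, false},   // Out-ITEM / Out-STORM -> deferred
    };
    for (const auto& r : reqs) {
        Priority p = classify(r);
        std::cout << r.id << ": "
                  << (p == Priority::One ? "priority 1"
                      : p == Priority::Two ? "priority 2" : "deferred") << '\n';
    }
    return 0;
}
```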
This requirements analysis process produced 542 STORM+ requirements. After further assessment and review by OPNAV and the U.S. Air Force, the number of STORM+ requirements was reduced to 527.

The final product of this process was a STORM+ requirements document. Both the sponsor (N81) and the model manager (A9) signed off on the STORM+ requirements document—N81 from the perspective of what was needed and A9 from the perspective of feasibility. This final set of requirements was split into two categories: the first category represented 231 requirements that the existing STORM could not support and that needed to be added as "unique" development efforts; the second category represented 296 requirements that STORM could support to some degree (those designated In-STORM).


The 231 unique STORM+ requirements were provided to the STORM+ Functional Analysis IPT to develop a STORM+ conceptual model for use by the STORM+ developer. The In-STORM requirements were provided to the CV&T IPT to determine whether the STORM implementation was sufficient to fully meet the requirement.

Functional Analysis Development
It is within the functional analysis development that the first integration of systems engineering and agile development occurred. A set of requirements is not sufficient to start software development. The Functional Analysis IPT was responsible for developing and maintaining a STORM+ conceptual model. The conceptual model was a document that described how the unique STORM+ requirements should be implemented, from a real-world operator's perspective rather than from a modeler's perspective. The conceptual model's purpose was to

• Document traceability between requirements and desired capabilities

• Provide a basis for preliminary design and other planning activities by the developers

• Support the associated CV&T activity in relating development plans to requirements

Development of the conceptual model was a collaboration between analysts who currently perform campaign analysis for the U.S. Navy and the developers of STORM. This collaborative approach was designed to accommodate a compressed development timeline. It also ensured that the resulting STORM+ model was responsive to U.S. Navy analysis needs while retaining its architectural integrity, the coherence of its methodology, and the analysis capability on which users of STORM had come to rely. The conceptual model also helped improve the consistency of the analysts' interpretation of model capability before the developer initiated code development.

The process for creating the conceptual model was to convene multiple whiteboard sessions comprising selected SMEs from the Functional Analysis, Requirements, and CV&T IPTs as well as the STORM+ developer. These whiteboard sessions were designed to allow the SMEs to interact freely with developers while discussing model functionality to meet a given set of requirements. After each whiteboard session, a Functional Analysis IPT member was assigned to develop a conceptual model description based on the discussions. These conceptual model descriptions were combined into an integrated STORM+ conceptual model that described an agreed-upon method of implementing maritime requirements into the existing STORM software. Figures 5 and 6 show examples of conceptual model products on command and control (C2) that were developed during a whiteboard session.

Figure 5. Sample of a C2 product description in the STORM+ conceptual model.

Although many of the conceptual model sections bore titles related to STORM+ requirement capabilities (anti-air warfare, anti-submarine warfare, etc.), there were also sections that described model functionality, such as "Maritime Motion," "Maritime Sensing," and "Maritime Prosecution, Engagement, and Damage." Each section of the conceptual model referenced the unique and, if needed, the In-STORM requirements for traceability.

The integration of developers with IPT SMEs in the whiteboard sessions was beneficial in ensuring that the functionality described did not adversely impact the current STORM architecture. Where possible, existing model architecture and functionality for ground and air forces was reused for maritime functionality. If the existing functionality did not support a required maritime function, then the whiteboard session members developed and mutually agreed on a solution.
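Because each conceptual model section referenced the requirements it implemented, traceability could be checked mechanically. The following sketch is a hypothetical illustration of such a check; the section titles echo those named above, but the requirement identifiers and data structure are invented. It flags any unique requirement that no conceptual model section references.

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

// Hypothetical traceability table: conceptual model section -> requirement IDs it references.
using TraceabilityMap = std::map<std::string, std::vector<std::string>>;

// Report any requirement that no conceptual model section references.
std::vector<std::string> findUntraced(const TraceabilityMap& sections,
                                      const std::set<std::string>& requirements) {
    std::set<std::string> referenced;
    for (const auto& entry : sections)
        referenced.insert(entry.second.begin(), entry.second.end());

    std::vector<std::string> untraced;
    for (const auto& req : requirements)
        if (referenced.count(req) == 0) untraced.push_back(req);
    return untraced;
}

int main() {
    TraceabilityMap sections = {
        {"Maritime Motion",        {"MOB-001", "MOB-002"}},
        {"Maritime Sensing",       {"SEN-010"}},
        {"Anti-Submarine Warfare", {"ASW-012", "SEN-010"}},
    };
    std::set<std::string> uniqueRequirements = {"MOB-001", "MOB-002", "SEN-010", "ASW-012", "AAW-003"};

    for (const auto& req : findUntraced(sections, uniqueRequirements))
        std::cout << "No conceptual model section traces to " << req << '\n';  // flags AAW-003
    return 0;
}
```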
Concurrent Verification and Testing
CV&T was divided into three phases. Phase I was responsible for testing the naval capabilities that were already in the existing STORM. Phase II was responsible for testing functionality within the U.S. Navy Interim release (spirals 1 and 2). And Phase III was responsible for testing the U.S. Navy Alpha version (spirals 3 and 4). Spiral 5 was released after maritime capability was fully integrated into STORM and therefore was tested under the existing procedures for STORM. Each testing phase followed a similar process, shown in Fig. 7. After each spiral test, results were fed back, in case revisions to the development effort and the conceptual design (the Functional Analysis IPT) were needed.


Figure 6. Sample implementation of the C2 product of Fig. 5 in the conceptual model. The sample shows naval unit organizational relationships. ATO, air tasking order; SSMs, surface-to-surface missiles.

Figure 7. Defining and scoping the spiral testing process.

Testers were selected from organizations that support OPNAV N81 analysis and have extensive experience in campaign modeling. They were challenged to develop test cases that grouped and sufficiently investigated each STORM+ requirement and conceptual model functionality. Each test case described not only the requirements to be tested but also a scenario, including technical, operational, and environmental data as well as acceptability criteria. The Data Integrity IPT was responsible for identifying data transformation algorithms needed to convert existing scenario databases within the U.S. Navy to STORM+ data files and for developing specific test databases that met the needs of individual test cases. The test cases were subsequently reviewed and accepted by OPNAV SMEs before use.

Each test case explicitly defined acceptance criteria that established the measures against which to judge the appropriateness of a simulation for an intended use. These criteria were developed by testers and reviewed by the Requirements and Functional Analysis IPTs as needed. Some fundamental properties of good acceptability criteria that were applied included the following:

• Criteria should map to the documented requirements.

• Criteria should be quantitative when practical but may be supplemented by qualitative values provided by the user and SMEs.

• Criteria should reflect the planned uses of the simulation.

• Criteria should support the assessment of statistical confidence in simulation results for intended uses.

Test Case Development Process
Each test suite followed a similar process for developing the underlying test cases. This process consisted of the following steps.

• Step 1: Define test cases. A detailed description of each test case was developed and documented with a standard template. Each test case was mapped to one or more requirements and applicable conceptual model sections and contained a description of the test scenario, the testing procedure, the tester-defined expected result, the test acceptability criteria, and the overall test results. All test cases required model runs; however, some portions of a test case, specifically those that related to asset attributes, could be verified by inspection of STORM+ input files and output reports.

• Step 2: Review of test cases by Functional Analysis and Requirements IPTs. Initial test case descriptions were reviewed by members of the Functional Analysis and Requirements IPTs to ensure the testing procedures met the intent of the STORM+ requirement or requirements.

• Step 3: Modify test cases as needed. Test cases were modified as needed based on the review results.

• Step 4: Refine/debug test cases. Test cases were implemented and prepared for model runs.

• Step 5: Run for record. Official test runs were conducted, and the results were documented.

• Step 6: Develop test results matrix. A results matrix summarizing the results of the test cases was produced.

• Step 7: Archive test cases. Completed test cases were archived to capture all of the test case descriptions, associated input data, reviewer comments and associated responses, test case results, test case traceability matrices, and test case results matrices.

Testers defined the test procedures for each test case and provided detailed documentation of the steps that were taken to implement the test (e.g., which input files were modified, which output files were reviewed, etc.). This level of detail served two purposes. First, it provided testers with an understanding of how STORM is structured and what is required to run the simulation. Second, it provided a basis for follow-on regression testing.

Reporting Test Case Results
The results of a test case provided evidence of how well the required capability was implemented in STORM+. Test results were placed into one of five categories:

1. Requirement met. The simulation fully supports the required capability and meets all the acceptability criteria.

2. Requirement partially met. The simulation supports some elements of the required capability but does not provide the complete functionality and/or does not meet all the acceptability criteria.

3. Requirement not met. The test results indicate that the simulation does not provide the required capability and either there is no trace to future development or the results do not meet any of the acceptability criteria.

4. Testing deferred. The required capability does not currently exist in the simulation but is planned in future STORM+ development.

5. Not tested. Requirement not selected for testing on the basis of a risk assessment (probability of a problem and overall impact to the model).
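As a concrete, hypothetical illustration of the standard template and the five reporting categories, the sketch below represents a test case record and tallies a small suite into a results matrix. The field names and sample entries are assumptions for the sketch; the actual STORM+ test cases were documented artifacts, not code.

```cpp
#include <array>
#include <iostream>
#include <string>
#include <vector>

// The five reporting categories used for STORM+ test results.
enum class Result { Met, PartiallyMet, NotMet, Deferred, NotTested };

// Illustrative test case record mirroring the standard template fields;
// field names are assumptions for this sketch.
struct TestCase {
    std::string id;
    std::vector<std::string> requirementIds;  // mapping to one or more requirements
    std::string scenario;                     // scenario definition
    std::string acceptabilityCriteria;
    Result result;
};

int main() {
    std::vector<TestCase> suite = {
        {"TC-ASW-01", {"ASW-012"},            "ASW screening scenario",           "Detection ranges within band", Result::Met},
        {"TC-AAW-02", {"AAW-003", "AAW-007"}, "Carrier group air defense",        "All engagement criteria met",  Result::PartiallyMet},
        {"TC-MIW-03", {"MIW-027"},            "Mine clearance timeline scenario", "Planned for a future spiral",  Result::Deferred},
    };

    // Summarize into a results matrix: counts per category.
    std::array<int, 5> counts{};
    for (const auto& tc : suite) counts[static_cast<int>(tc.result)]++;

    const char* labels[] = {"met", "partially met", "not met", "deferred", "not tested"};
    for (int i = 0; i < 5; ++i)
        std::cout << labels[i] << ": " << counts[i] << '\n';
    return 0;
}
```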
Test Results
Figure 8 shows the three test phases and the categorization of requirements through the phases. Phase I testing focused exclusively on requirements that were at least partially implemented in STORM. Each requirement was tested within one or more test cases and placed into one of the five categories defined above. The last category, not tested, was not used in Phase I.


Figure 8. Requirements transition through test phases.

Requirements that did not meet the conditions to be declared met were carried forward into Phase II. Additionally, new functionality implemented in development spirals 1 and 2 was added to the requirements set to be tested in this phase. The results of the testing placed this new set of requirements into one of four categories. Again, the not tested category was not used in Phase II.

The cycle was repeated for the final Phase III, with the four categories becoming met, partially met, not met, and not tested. The not tested category applied to requirements that, for one of many possible reasons, were not included in the Phase III testing. Reasons included that the testing effort was descoped on the basis of a risk assessment or that the requirement was deleted from the original list, meaning that the sponsor no longer considered it a requirement in the initial version of STORM with maritime capability. The number of requirements in this category was ~16%, and they dealt almost exclusively with model data rather than with functional representations.

Requirements deemed as not met by the test team were also few, at only 3%. After Phase III testing, the sponsor chaired a final adjudication process that included the leads from the Systems Engineering, Requirements, Functional Analysis, CV&T, and Development IPTs. The purpose was to review and obtain final sponsor interpretation of all requirements categorized as either partially met or not met. In addition to accepting the results of Phase III testing, the sponsor dropped one requirement as being beyond the original scope, characterized others as coding errors to be fixed immediately before release of STORM v2.0, and designated one requirement for future implementation and/or correction in subsequent releases of the simulation.

Risk Management
Finally, the Risk Management IPT was responsible for identifying, tracking, and reporting on all risks throughout the project. Risk management consisted of identifying, planning, mitigating, and retiring risks to the program. Risks were handled by a combination of methods. Any team member could identify and propose a risk. The Risk Management and Systems Engineering IPTs reviewed all proposed risks, and if they were accepted, a risk manager from outside of the Risk Management IPT was assigned to develop, execute, and monitor a mitigation plan. The Risk Management IPT was responsible for managing and documenting the process, while identifying and mitigating specific risks was spread across all of the IPTs. This process ensured that the appropriate IPT took ownership for each significant risk.

Once the risk manager was convinced that the risk had been successfully mitigated, he could apply to the Risk Management IPT to retire the risk. When the Risk Management IPT was convinced the risk was mitigated, the IPT chair would petition the Systems Engineering IPT for final risk retirement.
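The risk workflow just described (any member proposes a risk, the Risk Management and Systems Engineering IPTs review it, an assigned risk manager mitigates it, and retirement requires concurrence from both IPTs) can be viewed as a small state machine. The sketch below is a hypothetical illustration of that lifecycle, not a tool used on the program.

```cpp
#include <iostream>
#include <stdexcept>
#include <string>

// Lifecycle states for a tracked risk, mirroring the STORM+ workflow described in the text.
enum class RiskState { Proposed, Accepted, Mitigating, Retired };

// Hypothetical risk record; any team member could propose a risk.
struct Risk {
    std::string description;
    std::string owner;  // risk manager assigned from outside the Risk Management IPT
    RiskState state = RiskState::Proposed;
};

// Transitions enforce the review and retirement gates described in the text.
void accept(Risk& r, const std::string& owner) {
    if (r.state != RiskState::Proposed) throw std::logic_error("only proposed risks can be accepted");
    r.owner = owner;
    r.state = RiskState::Accepted;
}

void beginMitigation(Risk& r) {
    if (r.state != RiskState::Accepted) throw std::logic_error("mitigation requires an accepted risk");
    r.state = RiskState::Mitigating;
}

void retire(Risk& r, bool riskIptConcurs, bool systemsEngineeringIptConcurs) {
    // Retirement requires both the Risk Management IPT and the Systems Engineering IPT to concur.
    if (r.state == RiskState::Mitigating && riskIptConcurs && systemsEngineeringIptConcurs)
        r.state = RiskState::Retired;
}

int main() {
    Risk r;
    r.description = "Availability of qualified testers";  // the dominant program risk noted below
    accept(r, "assigned risk manager");
    beginMitigation(r);
    retire(r, true, true);
    std::cout << r.description << " -> "
              << (r.state == RiskState::Retired ? "retired" : "open") << '\n';
    return 0;
}
```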

Figure 9 shows the number of risks tracked by program phase; the correlation between testing and risk mitigation is obvious. Identification of program risk tends to precede each test phase as test team members focus on development products and potential testing issues. Risk mitigation comes about through a variety of activities and must be complemented with testing to provide essential insights into the risk and confirm the effectiveness of the mitigation. Although risk was assessed across all aspects of the project—including schedule, resources, and model performance—the dominant risk came from the availability of qualified testers and its impact on the number of requirements that could be tested and retested if necessary. Mitigation for this risk involved close monitoring of the progress, productivity, and availability of the testers to provide early visibility to management when testing organizations were experiencing personnel turnover or resource issues. Through timely reviews and management intervention, this risk was not realized; sufficient experienced personnel were available throughout the testing phases.

Figure 9. Number of program risks, by phase, tracked by the Risk Management IPT.

CONCLUSION AND LESSONS LEARNED

The APL systems engineering support to N81's STORM+ project was an unqualified success. The process of taking a single-service campaign model and modifying it to be embraced by another service was a daunting task. The systems engineering methodologies employed ensured that (i) requirements were identified, verified, and controlled in scope; (ii) requirements were translated into conceptual models that could be integrated into existing code; (iii) verification of the implemented code was based on previously vetted acceptability criteria; and (iv) feedback mechanisms were in place to identify emergent modifications to requirements and risk. Several critical lessons were learned from the project:

• Systems engineering concepts are critical to managing software modification to existing applications. This is especially true for projects of the scope and size of STORM+.

• Systems engineering is compatible with and enhances the relationship between software development concepts (e.g., agile programming) and traditional requirements and concept development.

• Flexibility in applying systems engineering concepts is key to maintaining active participation of project participants that may have a broad range of experience with systems engineering.

• Systems engineering provides a framework and environment for the various divergent supporting organizations to collaborate in integrating and delivering a quality product.

Project participants were quick to recognize the role of sound systems engineering tenets in keeping the project on schedule and within budget, mitigating risk and delivering the desired campaign modeling capability to N81. We hope this example will provide future software project teams the confidence to embrace systems engineering as a dynamic framework for proactive project management.

ACKNOWLEDGMENTS: We thank Robbin Beall, Head of OPNAV's Campaign Analysis Branch, for having the vision to insist on having systems engineering involved from the beginning, as well as Sunny Conwell and Jerry Smith of OPNAV's World Class Modeling Initiative for empowering systems engineering within the project organization at the right level to be effective. We also thank the following APL staff members, each of whom applied systems engineering principles in a practical way to achieve the necessary results: Joseph Kovalchik, the first STORM+ systems engineer, who established the role systems engineering would play; Ngocdung Hoang, who was instrumental in the structure and rigor of the requirements process; Peter Pandolfini, who led the Risk Management IPT; Simone Youngblood, who led the CV&T IPT; and Alicia Martin, who led the Test Team. Systems engineering support for the STORM+ program was sponsored by OPNAV.

REFERENCES

1. Kossiakoff, A., and Sweet, W. N., Systems Engineering Principles and Practice: Wiley Series in Systems Engineering and Management, A. P. Sage (ed.), John Wiley & Sons, Hoboken, NJ (2003).
2. Blanchard, B. S., and Fabrycky, W. J. (eds.), Systems Engineering and Analysis (5th Ed.), Prentice Hall, Upper Saddle River, NJ (2010).
3. Kendall, K. E., and Kendall, J. E., Systems Analysis and Design (8th Ed.), Prentice Hall, Upper Saddle River, NJ (2010).
4. Pressman, R. S., Software Engineering: A Practitioner's Approach (7th Ed.), McGraw Hill, New York (2009).
5. Chairman of the Joint Chiefs of Staff, The National Military Strategy of the United States of America: A Strategy for Today; A Vision for Tomorrow, http://www.defense.gov/news/mar2005/d20050318nms.pdf (2004).


The Authors

Robert L. Sweeney is a member of the Senior Professional Staff in APL's National Security Analysis Department. A retired U.S. Navy officer with experience in warfighting analysis with both the U.S. Navy and Joint staffs, his current assignment is as the Program Manager for the U.S. Navy's Resources, Requirements, and Assessment Directorate (OPNAV N8), which includes their World Class Modeling initiative. He has been an instructor in the Systems Engineering Program of The Johns Hopkins University Whiting School of Engineering since 2006. Jeffrey P. Hamman is a member of the Senior Professional Staff in APL's National Security Analysis Department and the Systems Engineering IPT lead for the STORM+ task. He is an accomplished operations research analyst with more than 20 years of experience in DoD acquisition, M&S, test and evaluation, operations research, systems analysis, and military aviation weapon systems. Steven M. Biemer is a member of the Principal Professional Staff in APL's National Security Analysis Department. He is currently the coordinator for APL's Systems Engineering Competency Advancement program, with the goal of educating and training the technical staff in the latest systems engineering principles and practices. Before taking on this role, he was the Program Area Manager for Naval Analyses and Assessments, where he worked with U.S. Navy and U.S. Marine Corps organizations to define and conduct analytical assessments of warfighting systems, platforms, architectures, and networks. Mr. Biemer has 25 years of systems engineering and analysis experience with APL. Additionally, he is a curriculum developer and instructor for the Systems Engineering Program of The Johns Hopkins University Whiting School of Engineering. For further information on the work reported here, contact Robert Sweeney. His e-mail address is [email protected].

The Johns Hopkins APL Technical Digest can be accessed electronically at www.jhuapl.edu/techdigest.
