CAN UNCLASSIFIED

Combat Resource Allocation Support (CORALS) Continuous Integration Strategy: Toward Faster Delivery Through Automation of Software Development Build, Test, Integration and Deployment Processes

S. Turgeon DRDC – Valcartier Research Centre

The body of this CAN UNCLASSIFIED document does not contain the required security banners according to DND security standards. However, it must be treated as CAN UNCLASSIFIED and protected appropriately based on the terms and conditions specified on the covering page.

Defence Research and Development Canada Reference Document DRDC-RDDC-2021-D025 February 2021



IMPORTANT INFORMATIVE STATEMENTS

This document was reviewed for Controlled Goods by Defence Research and Development Canada (DRDC) using the Schedule to the Defence Production Act.

Disclaimer: This publication was prepared by Defence Research and Development Canada an agency of the Department of National Defence. The information contained in this publication has been derived and determined through best practice and adherence to the highest standards of responsible conduct of scientific research. This information is intended for the use of the Department of National Defence, the Canadian Armed Forces (“Canada”) and Public Safety partners and, as permitted, may be shared with academia, industry, Canada’s allies, and the public (“Third Parties”). Any use by, or any reliance on or decisions made based on this publication by Third Parties, are done at their own risk and responsibility. Canada does not assume any liability for any damages or losses which may arise from any use of, or reliance on, the publication.

Endorsement statement: This publication has been published by the Editorial Office of Defence Research and Development Canada, an agency of the Department of National Defence of Canada. Inquiries can be sent to: [email protected].

Template in use: EO Publishing App for SR-RD-EC Eng 2018-12-19_v1 (new disclaimer).dotm

© Her Majesty the Queen in Right of Canada (Department of National Defence), 2021 © Sa Majesté la Reine en droit du Canada (Ministère de la Défense nationale), 2021


Abstract

This Reference Document describes the continuous integration (CI) processes currently used for the development of the software prototype Combat Resource Allocation Support (CORALS). The CI strategy is specifically tailored to support both the research project requirements and the needs of its stakeholders and practitioners.

Several principles, techniques and best practices for automation of software development activities were selected, implemented and tested using tools and technologies, which are detailed in this document.

The positive impacts of this CI strategy are manifold: the productivity of the development team has increased, stakeholders have greater access to the latest daily build of the prototype, the application is more stable, and the software development processes are now well documented, to name a few.

Significance to Defence and Security

DevOps is a holistic, post-Agile movement in software development that is attracting growing attention from organizations worldwide, including the Department of National Defence. The term DevOps is formed from the combination of the words development and operations. It is an umbrella term for several iterative and incremental automated development methodologies aimed at transitioning solutions, designed by coders, to the operational community more quickly. One of its core principles is that every organization working on software should have a well-designed continuous integration strategy in effect.

There are many benefits to automating the software development tasks that come after code is written. Intrinsically, an automated task is more predictable, repeatable, reliable, and faster. In addition, computerized automated tasks can have side benefits, such as the automatic production of reports for analysis by humans. These reports can help further optimize the automated software engineering scripts by highlighting problems, as well as by providing metrics about the code being built, integrated, tested, packaged and deployed automatically.

A continuous integration strategy is especially important for teams of developers. A single, unique contributor to an application codebase may not want to invest time and resources in acquiring, deploying, configuring and maintaining the infrastructure required to run automated software development tasks, especially if he can run all these tasks right from his developer workstation. But when several persons contribute to the codebase of a software product, the automation of tasks such as compilation and testing is essential, not only to save time but also to provide quick feedback to the whole team, confirming whether or not the software was broken by someone’s changes, either by notifying the team of compilation results or of automated test execution reports. In addition, the building process no longer resides solely inside a developer’s head and habits, but is formally and fully specified inside reliable, repeatable automated scripts, which, when written following clean code principles, can be considered a form of documentation [1].


Résumé

Ce document de référence décrit la stratégie d’intégration continue présentement utilisée pour le développement du prototype « Combat Resource Allocation Support (CORALS) ». Cette stratégie a été spécialement conçue pour supporter les besoins du projet de recherche ainsi que ceux de ses principaux intervenants.

Pour concevoir cette stratégie, des principes et techniques reconnus pour l’automatisation de plusieurs activités du développement logiciel et de la gestion du cycle de vie des applications ont été sélectionnés, implémentés et testés à l’aide d’une série d’outils et technologies aussi présentés en détail.

Les impacts positifs de la stratégie d’intégration continue sont nombreux: la productivité de l’équipe de développement a augmenté, les utilisateurs ont plus facilement accès à la dernière version journalière du prototype, l’application est plus stable, et les processus de développement logiciel sont mieux documentés, pour n’en nommer que quelques-uns.

Importance pour la défense et la sécurité

Le mouvement DevOps est une évolution holistique du mouvement Agile, qui gagne de plus en plus en popularité dans les organisations à travers le monde, incluant le ministère de la Défense nationale. Le terme DevOps est un amalgame des mots développement et opérations. Il s’agit d’un terme qui couvre plusieurs méthodes itératives et incrémentales pour l’automatisation d’activités liées au développement de logiciel, permettant aux solutions de passer de la conception vers les opérations plus rapidement. Un de ses principes fondamentaux est l’application d’une stratégie d’intégration continue sur mesure pour chaque organisation qui conçoit du logiciel.

Il y a plusieurs avantages à automatiser les étapes de développement qui viennent à la suite de l’écriture du code. Intrinsèquement, une tâche automatisée devient répétable, plus prévisible, plus fiable et plus rapide. Des tâches automatisées par ordinateur découlent aussi des bénéfices indirects, tels que la production automatique de rapports, qui peuvent être ensuite analysés par les humains. Ces rapports peuvent aider à optimiser davantage les scripts d’automatisation aux différentes étapes du développement logiciel, du fait qu’ils fournissent des métriques détaillant les résultats des étapes intermédiaires de construction, d’intégration, de test, d’archivage et de déploiement.

Le fait d’avoir une stratégie d’intégration continue est d’autant plus important pour une équipe de développeurs. Un unique développeur, seul contributeur au code, pourrait ne pas vouloir investir temps et argent dans l’acquisition, le déploiement, la configuration et la maintenance de l’infrastructure requise pour exécuter des tâches automatisées de développement logiciel. La principale raison reste qu’il peut exécuter toutes ces tâches à partir de son poste personnel de développement. Mais lorsqu’une équipe est constituée de plusieurs membres, qui contribuent tous à l’écriture du code, l’automatisation des tâches devient essentielle. Non seulement l’équipe sauvera du temps, mais l’infrastructure fournira aussi aux membres une rétroaction confirmant qu’une application n’est pas brisée à la suite de l’intégration de changements au code produits par un membre. Ces confirmations se font par l’envoi de notifications des résultats de compilation ou de tests. L’implémentation d’une stratégie d’intégration continue fait en sorte que le processus de construction ne réside plus seulement dans la tête d’un développeur, mais se présente sous la forme de spécifications formelles à l’intérieur de scripts fiables, répétables et qui peuvent être automatisés. Lorsque ces scripts sont écrits en appliquant les meilleures pratiques de code propre, il s’agit ici aussi d’une certaine forme de documentation des processus.


Table of Contents

Abstract
Significance to Defence and Security
Résumé
Importance pour la défense et la sécurité
Table of Contents
List of Figures
List of Tables
Acknowledgements
1 Introduction
1.1 Outline
2 Context
2.1 Emerging Air Missile Defence (EAMD) Project
2.2 Combat Resource Allocation Support Prototype
2.2.1 Battle Management Capability
2.2.1.1 Automation Algorithms
2.2.1.2 OMI
2.2.2 Stimulation and Simulation Capability
2.2.2.1 Scenario Scripting
2.2.2.2 Stimulation
2.2.2.3 Simulation
2.2.2.4 Visualization
2.2.2.5 Data Collection and Analysis
2.3 Technical Architecture
2.3.1 Runtime
2.3.2 Codebase
3 CORALS Continuous Integration Strategy
3.1 Team
3.1.1 Operation
3.1.2 Development
3.2 Infrastructure
3.2.1 Development Model
3.2.2 Hardware Architecture
3.2.2.1 Developer Workstations
3.2.2.2 Build Servers
3.2.2.3 Authentication Server
3.2.2.4 Application Lifecycle Management Server
3.2.2.5 File Server
3.2.2.6 Demonstration Computers
3.3 Build Workflows
3.3.1 Triggers
3.3.2 Client-side Ignore List
3.3.3 Feature Branches
3.3.4 Workload Distribution
3.3.5 Workload Parallelization
3.3.6 Incremental Builds
3.3.7 Clean Builds
3.3.8 Run Impacted Tests Only
3.4 Build Workflow As Code
3.5 Security
3.6 Reports
3.7 Dependency Management
4 Tools
4.1 Hardware
4.1.1 Physical Versus Virtual Machines
4.2 Software
4.2.1 Operating Systems
4.2.2 Developer’s Workstation
4.2.3 Application Lifecycle Management
4.2.4 Version Control System
4.2.5 Build Servers
4.2.6 Build Pipelines
4.2.6.1 Fast Incremental Build
4.2.6.2 Pre Transformation Tasks
4.2.6.3 External Dependencies
4.2.6.4 Jobs
4.2.6.5 Post Transformation Tasks
4.2.6.6 Traceability
4.2.6.7 Build Workflow Authoring
5 Conclusion
References
List of Symbols/Abbreviations/Acronyms/Initialisms


List of Figures

Figure 1: CORALS high level architecture.
Figure 2: CORALS battle management main processes.
Figure 3: CORALS runtime technology stack.
Figure 4: CORALS development technology stack.
Figure 5: BOMNet network topology.
Figure 6: Continuous integration steps and intermediate outputs.
Figure 7: Web designer view of incremental build pipeline jobs and tasks.
Figure 8: Web designer view of clean build pipeline jobs and tasks.
Figure 9: Append RTI to path pre-transformation PowerShell task.
Figure 10: Tasks of the OMI job.
Figure 11: Incremental build process of the OMI job.
Figure 12: Incremental test process of the OMI job.
Figure 13: Tasks of the Java job.
Figure 14: Incremental build process of the Java job.
Figure 15: Incremental test process of the Java job.
Figure 16: Tasks of the Testbed job.
Figure 17: Incremental build process of the Testbed job.
Figure 18: Incremental test process of the Testbed job.


List of Tables

Table 1: CORALS development tools.


Acknowledgements

My first acknowledgment goes to my colleague Normand Pageau, who is always willing to sustain a good discussion about software development best practices.

I would also like to thank the other CORALS developers, most specifically my teammates from the company Menya Solutions, who continuously bring new and refreshing ideas to the table, and who are always eager to share their past and current experiences acquired using various methodologies.


1 Introduction

Nowadays, software is everywhere. Some would say that software has eaten the world [2]. Software is too important, too critical, to accept an absence of methodology or poorly enforced development rules while building it. Software engineering patterns, practices and principles have evolved a lot since the genesis of computers. In the early 2000s, with the growing popularity of model-driven development, some predicted that in the future code would be auto-generated from design models. They were wrong; code is still written by teams of humans. Coding is actually the design activity: coding is designing a solution to a problem, often using high level programming languages. Design models are no more than graphical representations of the code, used to facilitate communication of the work done. Design models can come before code, but the design is not complete until the code is written.

Code by itself does nothing. It needs to be processed from its high-level, human-readable programming language form into low-level, machine-readable instructions. This task is not done by humans but by machines, more precisely by specific types of software applications, such as compilers and linkers, that run on computers. Once compiled, code becomes an application that, when deployed, can be executed, either by humans or machines. A continuous integration strategy automates the steps that come after the code is written, or modified, by humans.

1.1 Outline

The document is organized as follows:
• Section 2 presents the context, including a brief description of the research project that funds this work, and an overview of the CORALS prototype, from both functional and architectural perspectives.
• Section 3 describes, at a high level and in a tool-agnostic fashion, the CORALS continuous integration strategy, while providing entry-level explanations of the software development best practices that inspired each element of the strategy.
• Section 4 presents the tools that were selected and are currently used to implement the strategy, and presents their configurations up to a certain level.
• Section 5 presents the conclusion, including recommendations for future improvements to the current strategy.


2 Context

The context described in this section is twofold; first, the scientific project is presented, and then, its software prototype, CORALS, is briefly described.

2.1 Emerging Air Missile Defence (EAMD) Project

Defence Research and Development Canada (DRDC) has been developing the CORALS prototype for more than 15 years under several consecutive research projects. It is the fruit of collaboration between industry, academia and government organizations. The current research project, EAMD, is sponsored by the Canadian Forces Maritime Warfare Centre (CFMWC) and the Directorate of Naval Requirements (DNR). The high level objective of the EAMD project is to explore, develop, and demonstrate advanced automation and decision support concepts and technologies to enhance battle management effectiveness in joint and combined air and missile defence. The concepts under development are organized in four main themes [4]:
• Autonomy: autonomous decision making concepts and capabilities for naval battle management;
• Coordination: concepts and solutions to manage co-existence, interactions, and co-operation during distributed EAMD operations;
• Interoperability: standards, concepts and solutions to enable and improve interoperability in coalition EAMD operations; and
• Integration and maturation: an integrated and sea-going prototype to support all Above Water Warfare (AWW) related activities.

The developed concepts and technologies are mainly demonstrated using an integrated software prototype called CORALS.

2.2 Combat Resource Allocation Support Prototype

CORALS is an automation and decision support capability prototype developed by DRDC – Valcartier Research Centre. CORALS supports naval unit and force operators in the conduct of battle management tasks in the context of air and missile defence operations.

CORALS is composed of two main parts: Battle Management and Stimulation and Simulation. Figure 1 presents the various components of these two parts all connected by a common middleware.


Figure 1: CORALS high level architecture.

2.2.1 Battle Management Capability

CORALS Battle Management Capability has two main components, that is, the automation algorithms (backend) and the Operator-Machine Interface (OMI). CORALS also offers a flexible integration middleware (based on Real Time Innovations (RTI) Data Distribution Service (DDS)), which manages all communications between the capabilities, components and subcomponents. The middleware therefore allows for a separation of the OMI from the automation algorithms. It also offers an interface with the CORALS Stimulation and Simulation Capability that allows for the testing and demonstration of the CORALS Battle Management concepts.

2.2.1.1 Automation Algorithms

CORALS implements various advanced algorithms that cover the entire EAMD detect-to-engage sequence, with a primary focus on the threat evaluation, engageability assessment, and effector management processes. CORALS also provides picture compilation solutions to the extent required to support the three primary battle management processes (see Figure 2).


Figure 2: CORALS battle management main processes.

CORALS implements a reactive (rule-based) and a deliberative (intent/capability-based) approach for threat evaluation. It also offers several automation algorithms for effector management, for planning and coordination of hardkill and softkill actions.

For naval task group operations, CORALS provides different force coordination modes, ranging from centralized to fully independent. Most of these modes support cooperative engagements. All automation algorithms take into account the operator’s decisions (e.g., an action removed from or added to the response plans by the operator) and support collaboration between co-located and remote operators. CORALS algorithms implement various metrics (such as risk to force, risk to mission, etc.) to evaluate the quality of the unit and force responses.

2.2.1.2 OMI

CORALS OMI was developed to properly support naval operators in the context of task group operations. Several OMI instances (hence several operators) can be simultaneously connected to the same backend or different backends that run the automation algorithms. The aim of CORALS OMI is to provide situation awareness and support the operators in their analysis and decision making tasks. The OMI offers various functional displays with intuitive and efficient human-computer interactions.

2.2.2 Stimulation and Simulation Capability

CORALS offers an advanced Testbed environment for naval single-ship and task group operations. The capability provides the following functions.

2.2.2.1 Scenario Scripting

Based on the Stage and Ship Air Defence Model (SADM) software, this capability allows the creation of single-ship and force air and missile defence scenarios and vignettes. Generated scenarios are used to drive the simulation of the constitutive elements of the synthetic operational environment, such as threats, own force platforms and resources, as well as their interactions with each other and with the surrounding environment.

2.2.2.2 Stimulation

The stimulation capability provides the dynamic data required by the battle management support capabilities to operate. The data may come from a synthetic environment (such as SADM or ODIN), from a connection to a ship’s Combat Management System (CMS) 330 (live or recorded), or from a combination of both.

2.2.2.3 Simulation

The simulation capability closes the loop on the battle management support capabilities by simulating their actions up to and including engagements. The synthetic environment allows for the simulation of blue and red platforms and their resources, background entities, the environment, and their interactions. The current version of CORALS provides an internal low-fidelity simulation, as well as an external medium- to high-fidelity simulation that uses Stage, SADM and ODIN.

2.2.2.4 Visualization

Geo-referenced visualization tools display the evolution of the scenarios and assist in the assessment of defence effectiveness. The current version of CORALS uses an internal solution for the 2D display of the tactical picture, and SIMDIS for the 3D display of ground truth information when connected to SADM.

2.2.2.5 Data Collection and Analysis

CORALS collects and records all the stimulation, simulation and capability data required to evaluate the performance of the developed solutions. This data can afterwards be replayed without being connected to a simulator or a stimulator. Tableau can be used to facilitate the analysis of the produced data.

2.3 Technical Architecture

Most CORALS software components adopt the publish and subscribe paradigm supported by RTI’s Data Distribution Service (DDS) bus implemented using a set of utilities and libraries.

2.3.1 Runtime

The CORALS runtime (see Figure 3) consists mainly of:
• Interfaces with third party visualization and simulation tools;
• Interfaces with other Combat Management Systems (CMS), such as the CMS-330 equipping the modernized Halifax class ships;
• Application-specific modules for business logic (battle management processes);
• Graphical user interfaces (Operator-Machine Interfaces (OMI));
• Applications for data collection, recording and replaying;
• Applications for experiment management;
• Communication and coordination applications:
  o Information exchange control (IEC);
  o Network monitoring; and
  o Coordination control.
• Custom stimulators and low-fidelity simulators.

In addition to CORALS-specific applications, third-party applications (described in Section 2.2) also come into play with CORALS through interfaces, plugins or libraries.

Figure 3: CORALS runtime technology stack.

2.3.2 Codebase

The CORALS codebase (see Figure 4) consists mainly of these types of artefacts:
• C#, Java, C++ and C++/CLI source code;
• Domain-specific business classes;
• Interfaces;
• Plugins;
• Utility libraries for:
  o Geographical manipulation;
  o Mathematical operations;
  o Communication operations;
  o Data access layering;
• Model View ViewModel (MVVM) classes for graphical user interfaces;
• Automated unit and integration tests;
• Interface Description Language (IDL) files for the DDS CORALS data model;
• eXtensible Markup Language (XML) configuration files;
• Build definitions-as-code; and
• Deployment scripts.

Figure 4: CORALS development technology stack.


3 CORALS Continuous Integration Strategy

There are different ways of implementing a continuous integration strategy, as well as numerous best practices. This section lists the best practices that were selected and tailored to the specificities of CORALS development, and explains the theory behind them.

3.1 Team

The CORALS team is mainly composed of clients from the Canadian Armed Forces (CAF), developers and scientists. For the most part, the role of the scientists consists of developing the innovative battle management concepts, in accordance with the client’s requirements. From these concepts, the developers then perform the analysis, design and coding activities that materialize into new capabilities implemented inside the CORALS integrated prototype.

The size of the team is relatively small compared to the teams seen in industry developing commercial products. Therefore, each member often ends up having a broader range of responsibilities related to development and operations (DevOps).

3.1.1 Operation

From a continuous integration perspective, and more broadly from a DevOps perspective, the clients and scientists are on the Operation side. Since the main product developed is a research prototype, which does not comply with all the requirements of a platform-deployed product, the DevOps loop mainly produces deliverables that are tested and approved by the same scientists who designed the concepts. Therefore, the Operation part of the DevOps loop is not always directly the client, that is, the CAF, but more often ends up being the scientific community itself. The continuous integration therefore aims at frequently and rapidly delivering the new scientific capabilities implemented in CORALS to the scientist, who acts as the intermediate stakeholder, since the CAF client has fewer opportunities to see the evolution of the prototype inside Department of National Defence (DND) research facilities.

If CORALS were not a prototype but a maintained end-product frequently deployed to the military community, its operation environment would be the military facilities and platforms where the end-users operate. Since it is a prototype, the Research and Development (R&D) demonstration laboratories (such as the DRDC – Valcartier Naval Battle Management Laboratory) are considered the intermediate operation environment that the R&D stakeholders use to demonstrate new CORALS capabilities. These demonstrations mostly happen when an opportunity arises for the military client to come to the research facilities to discover the newest CORALS capabilities.

Although CORALS is not an operational military system, the prototype has been demonstrated outside of the laboratory, onboard Halifax class ships, on a few occasions during trials and experiments executed at sea. These events are rare opportunities to capture feedback from operators using CORALS for battle management.


3.1.2 Development

The CORALS development team, from a DevOps standpoint, is composed of coders, architects, analysts and information technology (IT) specialists. Not only do they have the responsibility of performing analysis, design and coding activities, but also of developing and maintaining the continuous integration infrastructure, for both its hardware and software parts.

The software development methodology enforced for CORALS is out of the scope of this document. However, it is appropriate to mention that each development team member is responsible for keeping both the codebase and the build infrastructure healthy. That is, when a coder commits new code to the server, it is his responsibility to make sure that the build passes, including pre-checks and post-checks (e.g., tests), and that the results are complete and properly deployed, or deployable, to the operation environment. If not, the author of the commit must either change his code or adapt the continuous integration operations to fix the issues.

3.2 Infrastructure

Each DevOps community has its particularities that should be properly supported by specific and tailored computing resources. The development and demonstration of CORALS mainly occurs at DRDC – Valcartier Research Centre. Development and demonstration activities are held in different rooms in two different buildings (see Figure 5).

3.2.1 Development Model

CORALS is developed under an on-premises model, that is, all computer hardware and software resources are deployed and maintained inside DRDC facilities on a closed network. This is opposed to the popular cloud computing models [5], where an organization rents its computing resources over the Internet, up to one of the following degrees:
• Infrastructure-As-A-Service (IAAS), where virtualization, servers, storage and networking reside in the cloud;
• Platform-As-A-Service (PAAS), which augments IAAS with runtime, middleware and services that run in the cloud; and
• Software-As-A-Service (SAAS), which augments PAAS with application and data services that execute in the cloud.

3.2.2 Hardware Architecture

To perform CORALS research and development activities, a common network was designed and deployed to connect all the development and operation computers together. This local area network (LAN) is called the Battle Operation Management Network (BOMNet). It consists mainly of one or many instances of:
• developer workstations;
• authentication server;
• build servers;
• application lifecycle management (ALM) server;
• file server; and
• demonstration computers.

Figure 5: BOMNet network topology.

3.2.2.1 Developer Workstations

Each developer of the team has access to a developer workstation. On such a computer, the typical applications are:
• Integrated Development Environments (IDE) and other development productivity tools;
• collaboration tools (such as chat, email, screen sharing);
• web browser (for web access to the ALM server or the Internet1); and
• local source control and repository.

The exact list of development applications and tools, with their version numbers, is found in Section 4.2.2.

1 CORALS is not developed on a network connected to the Internet. Therefore, developers have a second computer for accessing the Internet.


3.2.2.2 Build Servers

Build servers are the computers on which most of the continuous integration tasks are run. This is why this sub-section provides more detail than the ones describing computers that provide common network services.

Considering the relatively small size of the CORALS development team, which has varied over the years from a minimum of two to a maximum of ten developers, a pool of four build servers is enough to handle the daily continuous integration requests submitted by the developers.

One important goal of an efficient continuous integration infrastructure is to provide coders quick feedback from the build servers as to whether new code was compiled, tested, integrated and deployed successfully.

To optimize the execution of build server tasks, some software-level strategies can be implemented, such as the ones described in Section 3.3. In addition to applying these best practices and strategies, the raw performance of build servers can be improved by using faster hardware and by optimizing access times through various hardware optimization technologies.

There are three main hardware considerations to take into account when optimizing the overall time a build task takes to complete:
• Network speed;
• Central Processing Unit (CPU) speed; and
• Hard drive speed.

It is important to have good network performance for the first step of a continuous integration request execution, which is downloading the code from the repository (see Figure 6). Typically, coders commit their code to a central code repository using a version control system. The build system will eventually, depending on its trigger rules, run the tasks to integrate the new code. A build agent gets assigned the build job by the build controller. The first step, from the build agent’s perspective, is to get the modified code from the code repository, which resides somewhere else on the network. When large or numerous new files need to be downloaded, an efficient and quick network route between the build agent and the code repository can make a significant difference. Once the new code is present on the build agent’s hard drive, transformation tasks can occur. Transformation tasks will execute faster on hardware equipped with a fast CPU and fast hard drives.

When considering which CPU to select for a build server, not only will a fast CPU increase performance, but multi-core CPUs are also preferred because the compilation of some source files can be parallelized over the multiple CPU cores.

Fast hard drives are also important because build servers must frequently write to the hard drive. First, they write the new files they download from the central repository. Also, transformation tasks produce outputs that must be written onto the server hard drive.

Several tests have been performed with various types of hardware configurations to optimize CORALS compilation time. The use of Solid State Drives (SSD), of virtual hard drives emulated in random access memory (RAM), known as RAM disks, and of Redundant Array of Independent Disks (RAID) configurations were tested in order to achieve the best build performance.

Solid State Drives use integrated circuits and flash memory to store data, as opposed to electromechanical drives, which use movable magnetic heads to store data on spinning magnetic platters [6]. Depending on the interface technology used between the computer’s memory and the drive, an SSD can be from 5 to 20 times faster than an electromechanical drive [7]. RAM disks, as their name suggests, are disks made of RAM, created by reserving and dedicating some of a computer’s RAM to that purpose. Since RAM provides access times much lower than traditional electromechanical hard drives, it was worth trying to compile on a RAM disk. However, the improvement was insignificant, since RAM disk access times are close to SSD access times.

The use of RAID techniques for disks is very popular in the server world. There are many types of RAID configurations; describing all of them is out of the scope of this document.

The selected hardware solution for the physical build servers was a RAID-0 configuration of two SSDs. With a two-disk RAID-0 configuration, data is split across the two disk drives, which are used simultaneously. This almost doubles the data read and write transfer rates achievable with a single disk drive.

3.2.2.3 Authentication Server

The authentication server is used by the network administrators to manage computer, user and service accounts. These security containers are utilized to configure operating systems, the ALM server and the file server for restricting or permitting access to features and file artefacts.

This server can also be used to manage the whole Active Directory domain, mainly groups, network reservations and Group Policy Objects (GPO).

3.2.2.4 Application Lifecycle Management Server

An ALM server helps team members with:
• requirements and project management;
• code and version management;
• software testing and maintenance;
• change management; and
• continuous integration and release management.

An ALM server is typically closely linked with a source control system and a database for storing all application artefacts and their revision history.

3.2.2.5 File Server

The CORALS team built a structure of shared folders that are used to:
• Store software design documentation;
• Host source control repositories;
• Deposit CORALS installation files;
• Store installation packages of all development and runtime tools used by coders; and
• Store documentation of external tools and libraries.

3.2.2.6 Demonstration Computers

Several demonstration computers are deployed inside the Naval Battle Management Laboratory. This infrastructure can support the demonstration of a naval force composed of many instances of CORALS backend and frontend components distributed over the multiple computers and large shared displays inside the laboratory.

There is also a large horizontal touch table used for the demonstration of force-level interaction concepts using an OMI customized for the force commander.

3.3 Build Workflows

In situations where coders do not have an automated build server infrastructure, the build workflow is executed manually on each coder’s development computer. For example, when the coder is happy with the code he just wrote, he compiles it, runs the automated tests, and may also want to deploy it in a test environment and execute some manual tests to determine whether his new code produces the expected results. This can rapidly become repetitive, and errors that may even invalidate the results obtained are likely to happen.

The principles of continuous integration seek to minimize human interventions for software assembling and verification tasks since they occur often. In a team composed of a single developer, all tasks are performed on the developer’s workstation. With a team of developers, these tasks are better executed on a server. The server must have, at a minimum, all the same software development tools that are installed on a developer’s workstation in order to be able to build the application.

With continuous integration, the automation of build workflows, including all transformation and validation tasks, is possible by designing build definitions. These are the artefacts that contain all build logic as scripted commands for the build infrastructure to execute.

Contrary to what its name suggests, a good continuous integration strategy deals with more than just integration tasks [8]. It should automate tasks related to:
• Acquisition of the latest version of the code from the source control system;
• Pre-transformation tasks:
  o Code quality checks:
    - Compliance with a standardized team code style;
    - Static code quality analysis;
    - Static code security checks;
  o Dependency management;
• Transformation tasks:
  o Compilation;
  o Linking;
• Post-transformation tasks:
  o Unit testing;
  o Automated acceptance testing;
  o Integration testing;
  o Automatic generation of documentation; and
• Packaging and deployment of the runtime to stores, websites or other distribution mechanisms, depending on the type of application.


Figure 6: Continuous integration steps and intermediate outputs.

Build definitions will prescribe what, when and how to build, how to validate the results, where to deposit the results, under what format, and how and when to notify the authors and the stakeholders.

3.3.1 Triggers

Continuous integration automated tasks are usually configured to run automatically under specific conditions; these are called triggers. Depending on many factors (e.g., performance of the build infrastructure, size of the team, duration of pre- and post-transformation tasks), one may choose to define triggers that command the build infrastructure to run as soon as new code gets uploaded to the code repository. One may also define scheduled triggers that run tasks each night, or at the end of each development iteration (especially if iterations are very short). Typically, a mix of both time-based and event-based triggers is used, and this is the case for the CORALS CI strategy. Faster, incremental builds are triggered after daily commits (see Section 3.3.6), where only impacted tests are run (see Section 3.3.8), and complete, longer clean builds (see Section 3.3.7) are triggered every weekday during the night, where all tests, including integration ones, are run.
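To make this concrete, the following sketch shows how such a mix of triggers could be expressed in an Azure Pipelines YAML build definition (the branch name and schedule below are illustrative assumptions, not the actual CORALS configuration):

    # event-based trigger: queue an incremental build whenever new code reaches the master branch
    trigger:
      branches:
        include:
          - master

    # time-based trigger: queue the longer clean build every weeknight
    schedules:
      - cron: "0 2 * * Mon-Fri"
        displayName: Nightly clean build
        branches:
          include:
            - master
        always: true   # run even when nothing changed, so integration tests are exercised daily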

The simplest trigger strategy is to automatically trigger the complete set of build tasks each time someone commits new code to the code repository. However, this strategy may not be the most appropriate, for one or many of these reasons:
• A manual code review by a peer may be required, especially after less experienced coders commit new code, to highlight places where the code could be improved;
• The infrastructure that runs the automated continuous integration tasks is too slow to run the tasks and report back to the author in a timely manner;
• The codebase impacted by the new or modified code, including automated tests, is too big and too complex to be compiled and run in a timely manner; and
• The number of contributors to the codebase is too high compared to the performance the infrastructure can provide when simultaneous build requests are pushed.

There exist techniques to mitigate the build infrastructure overload issues that result from relying solely on pure continuous integration triggers, that is, rebuilding everything every time anything changes. This section details some of the mitigation strategies and techniques used in the design of the CORALS build definition files and schedules, aimed at increasing overall build efficiency.

3.3.2 Client-side Ignore List

When the time comes to commit modifications to source control, the modern IDE scans, in the background, the developer’s local code files to detect changes made within the project’s directories that will constitute the commit. Some files, though, are not authored by programmers, but generated automatically by the IDE. They serve different purposes; for example, they could store user-specific customizations of the IDE (such as color themes of code editors). Such files should not be stored in the server code repository, where they would trigger unnecessary build jobs. An ignore list contains the extensions of files that should not trigger continuous integration.

To facilitate the creation of such lists for the known file extensions that are usually not code related, and that should therefore not trigger any build on the server, a panoply of pre-defined ignore lists for almost all IDEs and programming languages exists on the Internet. This ignore list is usually shared amongst the team members and is therefore stored in the source control system so that everybody can have the exact same specification.
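As a minimal illustration, assuming a Git repository containing Visual Studio and Gradle projects such as those described in Section 4.2 (the entries below are representative examples, not the actual CORALS ignore list), a client-side .gitignore could contain:

    # IDE-generated, user-specific files that should never reach the server repository
    *.suo
    *.user
    .vs/
    # build outputs that are recreated by every compilation
    bin/
    obj/
    build/
    # Gradle caches
    .gradle/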

3.3.3 Feature Branches

When a requirement gets selected for implementation by the team, one or many developers are assigned the work. It is good practice to isolate the new work in order to minimize the impact it will have on others working on other parts of the system. To do so, a copy (clone) of the code gets created, and the new or modified pieces of code get committed to this place as they become ready. When the work is completed, the created copy gets merged back into the master code repository. This is called feature branching. The creation of feature branches is facilitated by the use of a distributed source control system, such as the one used for CORALS development, described in Section 4.2.4.

The whole branching strategy [9] of CORALS is out of the scope of this document. However, it is worth mentioning that feature branches, when used, have an impact on the organization of build definitions and on the design of the overall build workflow. For example, when someone commits to a feature branch, it should not trigger the same continuous integration tasks as when someone commits to the master repository. Applications built from feature branches are not releasable until the branches are merged into the master branch, from which clean builds execute (see Section 3.3.7) and produce releasable binaries.
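As a sketch of the mechanics (the branch name is hypothetical), creating, publishing and merging a feature branch with Git typically amounts to the following commands:

    git checkout -b feature/new-omi-display      # create an isolated branch from the current code line
    git push -u origin feature/new-omi-display   # publish it so teammates and build servers can see it
    # ... commit work to the feature branch as it becomes ready ...
    git checkout master
    git merge --no-ff feature/new-omi-display    # merge the completed, reviewed work back into master
    git push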

DRDC-RDDC-2021-D025 15

3.3.4 Workload Distribution

A build server infrastructure composed of a single build server instance may see its performance degrade as a growing team of coders commits many changes per day that always trigger the same single server. To prevent this from happening, concurrent build requests can be distributed over a pool of build servers available inside the build infrastructure, as seen in Figure 5.

3.3.5 Workload Parallelization

Build server infrastructure performance can also degrade as the complexity of the codebase increases. To mitigate this, build definitions are often decomposed into many independent smaller tasks that can be executed or queued in parallel on multiple idle build servers.
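As an example of what such a decomposition can look like as code, jobs that have no dependencies on one another can be queued in parallel on idle agents. The skeletal YAML sketch below borrows the job names used by the CORALS pipelines in Section 4.2.6, but reduces their content to placeholders:

    # the three jobs are independent, so the controller can queue them on separate idle agents
    jobs:
      - job: OMI
        steps:
          - script: echo "build and test the OMI components"      # placeholder for the real tasks
      - job: Java
        steps:
          - script: echo "build and test the Java components"
      - job: Testbed
        steps:
          - script: echo "build and test the Testbed components"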

3.3.6 Incremental Builds

Continuous integration is usually triggered when code that has changed needs to be rebuilt. With a large codebase, or a saturated infrastructure, incremental builds are critical to providing fast builds that give feedback to the coder in a timely fashion. The consensus is that an incremental build should not take more than five minutes to complete.

The concept of build avoidance can be summarized as building only when it is needed, and only what is needed. It is good practice in software engineering to, whenever possible, design an application as many independent modules with a clear separation of concerns. This results in more classes, functions and methods, each with minimized and specialized responsibilities. This way, when a loosely coupled piece of code is changed, only a minimal portion of the other pieces of code is impacted and needs to be rebuilt.

Incremental builds recycle artefacts from previous builds that are not impacted by the incoming code modifications, instead of building all binaries and executing all pre- and post-build tasks every time. The integration still produces up-to-date binaries that include the latest code changes, by integrating previous and current binaries into the updated application. An incremental build basically answers the coder’s question: “Have I broken something?” For the feedback to be quick, only the minimal build activities required to answer that question, including regression testing, are executed.

3.3.7 Clean Builds

Clean builds are the opposite of incremental builds. Nothing is recycled; everything is rebuilt from scratch from empty directories. These builds take longer to complete, which is why they are typically executed outside normal work hours, so that they do not monopolize the build infrastructure. They are useful to deal with modifications that cannot always be caught by incremental builds, such as modifications made to non-compilable artefacts like XML files or strings.

Clean builds are also performed before a release. More tests, such as integration tests, are performed against the newly built application. A clean build’s purpose is to gain, up to a certain level, confidence that the application works as expected. It answers the question: “Is the application releasable in production?”
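The distinction between incremental and clean builds can be illustrated with the command-line equivalents of the underlying build tools (solution and wrapper names are hypothetical; the actual CORALS builds are driven by the pipeline tasks described in Section 4.2.6):

    REM incremental build: MSBuild recompiles only the projects whose inputs have changed
    msbuild CORALS.sln /t:Build /m

    REM clean build: previous outputs are discarded and everything is rebuilt from scratch
    msbuild CORALS.sln /t:Rebuild /m

    REM same distinction with Gradle: incremental up-to-date checks versus a forced full rebuild
    gradlew build
    gradlew clean build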


3.3.8 Run Impacted Tests Only

In a test-driven development team, working on an application with hundreds of thousands of lines of code, such as CORALS, can result in a test suite that quickly becomes large and contains a lot of unit and integration tests. By default, these tests are run each time new code is committed and built by the server. Even when tests follow the best practice of being fast and independent of slow resources, running a few thousand tests after each build can result in significant delays in continuous integration task execution.

One way to mitigate this is to run only the tests that are impacted by the incoming code change. This works by analyzing the call graph of the source code to work out which tests should be run after a change to production code [10]. This is a good practice for daily incremental builds, when coders want quick feedback about their code changes. But for nightly builds, programmed to happen at night when nobody is working, it is recommended to run all tests because there are still rare situations where the machine cannot detect that a test may be impacted by a change. For example, consider a test for a unit of code that reads a text file and produces some outputs. Changing the content of the text file is not a change to code; therefore, the build system cannot predict that the test will be impacted by the change inside the file that contains the textual inputs of the test.
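In Azure DevOps, this behaviour corresponds to a single option of the Visual Studio Test task; a minimal YAML sketch (the test assembly pattern is a hypothetical example) could be:

    - task: VSTest@2
      displayName: Run impacted unit tests
      inputs:
        testAssemblyVer2: '**\*Tests.dll'   # hypothetical pattern locating the test assemblies
        runOnlyImpactedTests: true          # let the server compute the impacted test set from the change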

3.4 Build Workflow As Code

A continuous integration strategy is implemented for the most part by automating build workflows, either by scripting build instructions in a text editor or by using the graphical user interface provided by a continuous integration server application. There is a growing trend toward “everything as code” [11], which treats all parts of a system as code, even infrastructure in some cases [12]. The same applies to continuous integration artefacts. Whenever possible, it is recommended to define build tasks as code (or scripts) that can reside in source control systems, thereby facilitating change and version management. With this method, the continuous integration artefacts get all the advantages that source-controlled code has, such as traceability, auditability, meta-data, comparability, liability, time stamping, comments and revertability. Because designing a build workflow is a design activity, the resulting artefacts become part of the intellectual property of the organization, the same way code is. A build workflow written as clean code is good documentation of the applied software engineering practices, as it contains the sum of the design decisions that were made while designing the workflow.
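As an illustration of the principle (the file, pool and solution names are hypothetical and do not reproduce the actual CORALS definitions), a build workflow kept as code is simply a text file, for example azure-pipelines.yml, committed at the root of the repository and versioned like any other source file:

    # azure-pipelines.yml, versioned alongside the source code it builds
    pool:
      name: Default                        # agent pool containing the BOMNet build servers
    steps:
      - task: MSBuild@1
        inputs:
          solution: 'CORALS.sln'
          msbuildArguments: '/m'
      - task: VSTest@2
        inputs:
          testAssemblyVer2: '**\*Tests.dll'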

3.5 Security

As mentioned in Section 3.1, the CORALS team is small, thus its members have more than just one role. In larger teams, different people may be responsible for different parts of the build infrastructure. With responsibility comes liability; therefore, it is reasonable for a practitioner to restrict access to the part of the infrastructure under his responsibility. However, the CORALS development team is not a production team but a research team. Therefore, the impact of an erroneous configuration of some part of the build infrastructure will be limited to a few individuals. Moreover, product release deadlines are not too rigid, so the impact of the incurred delay is less critical. Since all members have administration privileges on all parts of the infrastructure, everyone can fix someone else’s error rapidly.


With this kind of liberty come some rules that must be enforced, such as the documentation of changes to the build workflows, and the traceability of authoring. The tools described in Section 4 allow users to keep a history of what modifications were made to the build workflows and by whom.

3.6 Reports

A continuous integration strategy should be very conservative; that is, it should fail if any of its steps fails, and alert the developer (the author of the error) accordingly. Typical reasons for a build to fail are:
• Code does not compile;
• Tests are failing; and
• Code is non-compliant with quality standards.

A report containing failure details, including the content of the commit (new or modified code), and the author identity, is included so that appropriate actions can be taken to investigate and fix the issues.

3.7 Dependency Management

When building software, it is advisable not to write everything from scratch. Many solutions already exist for many problems, whether in open-source, commercial or organizational ecosystems of libraries. However, utilizing an external library means becoming dependent on that library. The more dependencies an application has, the more complicated it becomes to smartly load these dependencies without sacrificing the performance or the lean quality of the application.

Having no dependency management strategy at all is called implicit versioning. The default is to load all external libraries into memory when the application is compiled or run. From the perspective of code organization, it translates into a single project that references all dependencies. From a continuous integration standpoint, this means a longer time to download the codebase and compile it, since for each build that happens on the server, the whole codebase must be acquired and the whole code must be compiled.

A dependency is a package consumed by a client application. This is generally managed in the form of a dynamically linked library, whose structure may be verified at design time but is only truly resolved at run time. One frequent issue seen with dynamic libraries is that the binary on which a consumer application depends is either missing from the targeted deployment environment or excessively duplicated, thus increasing the size of the resulting application.

Another form of dependency is the inherited dependency, whose resolution happens at compile time. One common issue with this kind of dependency is circular referencing, that is, one module refers to another one that refers back to the first, making it impossible to compile the dependencies before the application that consumes them.

There exist dependency management tools to help developers properly manage dependencies in order to avoid seeing the application grow too much in size and complexity. The tools and strategies used for CORALS dependency management are described in Section 4.
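As an example of explicit dependency declaration (artefact names and versions are hypothetical), a Gradle module can pin its external libraries in its build script, keeping versions visible and under source control:

    dependencies {
        // explicit, versioned dependency instead of an implicit reference to everything available
        implementation 'org.example:geo-utils:2.1.0'
        testImplementation 'org.junit.jupiter:junit-jupiter:5.4.2'
    }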


4 Tools

This section presents the tools and technologies used for the implementation of the CORALS CI strategy, whose techniques and principles are presented in Section 3. The configuration of the selected tools and technologies is also presented to some extent. This section is split into two subsections: hardware technologies and software technologies.

4.1 Hardware

The hardware underlying the continuous integration infrastructure consists entirely of commercial off-the-shelf (COTS) products acquired from popular and renowned vendors. The role of each computer and server of the infrastructure is described in Section 3.2.2.

The network (pictured in Figure 5) is all copper-based, and provides speeds of up to 1000 Megabits/s (1 Gigabit/s).

4.1.1 Physical Versus Virtual Machines

All workstations, such as the ones found in the development and demonstration areas, are physical desktops and laptops. These workstations are more graphics intensive, and are sometimes connected to a multiple monitor setup and to keyboard-video-mouse switches. For these reasons, it was more appropriate to dedicate physical machines to these purposes instead of virtual machines.

Servers are a mix of both physical and virtual machines. The virtual machine infrastructure is provided by a VMWare ESXi 6.7 operating system running on a Dell PowerEdge server equipped with 128 Gigabytes (GB) of RAM and 4 Terabytes (TB) of storage.

The ALM server runs on a virtual machine, which makes it easier to take backups and to revert to a previous state of the server if some changes to its configuration produce unexpected results.

Two build servers are virtual machines, and the two others are physical machines with a RAID-0 disk configuration for faster performance (see Section 3.2.2.2). The reason why there are two virtual build servers is that the ESXi infrastructure still had plenty of available resources for letting these two build servers run in parallel, and they were easy to set up. They do not provide the same high-end performance as the two physical build servers; however, having numerous build servers, even with mid-range performance, is helpful in allowing multiple build requests to run in parallel (see Section 3.3.4).

Both the authentication and file servers are physical servers. Their disk configuration is RAID-5, which provides a certain level of data redundancy. Such a dedicated disk configuration is not possible on a shared ESXi virtual infrastructure.


4.2 Software

The software ecosystem which runs on the hardware infrastructure is a mix of open-source and COTS products. The open-source tools and technologies that were selected all have a well-established and large enough community of users and proponents to facilitate support and maintenance.

4.2.1 Operating Systems

All computers run Microsoft Windows as their operating system, except for the virtualization server, which runs ESXi 6.7 but hosts only Microsoft Windows based virtual machines. There is a mix of Windows 7 and Windows 10 computers, all professional editions, for build servers, development and demonstration workstations. The file, authentication and ALM servers run the Windows Server 2016 operating system.

The Active Directory domain that links all workstation and server computers together is managed from the authentication server by the domain administrator (see Section 3.2.2.3 for details).

Security software and components are disabled in order to let DDS traffic go through unobstructed and to facilitate remote access. This represents minimal risk, as the whole infrastructure runs on a stand-alone network disconnected from the Internet. Moreover, everything introduced into this network is virus-scanned beforehand.

4.2.2 Developer’s Workstation

A developer workstation is composed of a series of software development tools. The following table lists the minimal toolset a developer must have:

Table 1: CORALS development tools.

Product Name                 Vendor      Edition        Version      Description
Visual Studio                Microsoft   Enterprise     2017 v15.9   IDE
Visual Studio                Microsoft   Enterprise     2013         Compilation of legacy third-party libraries
.Net Framework               Microsoft   n/a            4.5.1        .Net development
NetBeans                     Apache      n/a            11.1         IDE
Open Java Development Kit    Red Hat     Standard       11           Java development
Gradle                       Apache      Basic          5.4          Build and test automation
ReSharper                    JetBrains   Ultimate       2018.3.4     Productivity plugin for Visual Studio
RTI Connext                  RTI         Professional   5.3.0        DDS middleware development


Developers work on a local copy of the code that resides on their hard drive. The master copy is stored on the ALM server, which is described in the next section. The source control software used to perform actions on both local and remote repositories, Git, is described in Section 4.2.4. Visual Studio and NetBeans both have a built-in Git client for performing Git operations.

A developer’s workstation also contains common office productivity software, such as Microsoft Office 2016, and a web browser for accessing the ALM server through its web user interfaces.

4.2.3 Application Lifecycle Management

As described in Section 3.2.2.4, an ALM application serves many purposes. Azure DevOps Server 2019.3 is used by the team for CORALS lifecycle management. It is installed on-premises (as described in Section 3.2.1), and developers access it using its Uniform Resource Locator (URL) on the BOMNet network, either from a source control client, the web browser or the IDE.

In addition to the roles described in Section 3.2.2.4, Azure DevOps Server also provides wiki functionalities. The wiki is the central location where developers share the knowledge they acquire during the development of CORALS about techniques, technologies, tools, methodologies, processes and more. The topics it covers are unlimited; it is as varied as its contributors want it to be.

The configuration chosen for Azure DevOps 2019 with regard to source control, work item management, build definitions, testing and reporting is described in the next sections.

4.2.4 Version Control System

Git is the source control system (also called version control system) used by the CORALS development team. Azure DevOps 2019 has built-in Git functionalities. Since Git is a distributed revision control system, there is a remote (master) repository that resides on the Azure DevOps server, and all developers have their own private local copy of that repository, including all branches and the full history of changes.

Before a developer can contribute to the code, he needs to clone the master Git repository to a local Git repository that resides on the hard drive of his development workstation. From there the developer will perform, on a daily basis, common Git operations of synchronization, pulling and pushing new commits, comparing versions of files, etc. The description of Git features is out of scope of this document.

4.2.5 Build Servers

With Azure DevOps, build servers are called build agents. The server onto which Azure DevOps is installed acts as the build controller. When a build request is triggered, the controller selects which build agents will actually perform the continuous integration operations.

Agents are organized in pools. A pool groups together all agents that share the same build capabilities. From the Azure DevOps standpoint, a capability is a build functionality. For example, if a build job requires a Linux build agent, the build controller adds the request to the queue of the Linux agent pool. In the case of CORALS, all agents are configured the same way, so there is only one pool composed of four agents that all have the same build capabilities.
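
As an illustration only, a pipeline can restrict a job to agents that expose a given capability by listing demands on the pool; the pool and demand names below are hypothetical, not the actual CORALS configuration.

  pool:
    name: CORALS-Agents        # hypothetical pool name
    demands:
    - msbuild                  # only agents advertising this capability can take the job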


4.2.6 Build Pipelines

Build pipeline is the term used in Azure DevOps for build definitions. A build pipeline can be described as a process template: a set of configured tasks that are executed one after another on the source code every time a build is triggered. Tasks can be grouped into jobs.

Build pipeline commands can either be scripted manually in YAML Ain't Markup Language (YAML) from a text editor, or designed using the Azure DevOps graphical pipeline designer from a web browser.
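
For illustration, a minimal pipeline-as-code definition has the following shape; the trigger branch, pool name and job content below are placeholders rather than the actual CORALS pipeline definitions.

  trigger:
  - master                     # queue a build on every push to the master branch

  pool:
    name: CORALS-Agents        # hypothetical agent pool

  jobs:
  - job: OMI
    steps:
    - checkout: self           # the "Get Sources" step
    - script: echo Build and test tasks are added here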

There are two build pipelines defined for CORALS CI:

• CORALS incremental build (see Figure 7), for daily commits (see Section 3.3.6); and
• CORALS clean build (see Figure 8), for CORALS releases (see Section 3.3.7).

Figure 7: Web designer view of incremental build pipeline jobs and tasks.


Figure 8: Web designer view of clean build pipeline jobs and tasks.


It was also decided to have the pipelines build all CORALS modules, rather than having one pipeline per programming language or per module (executable or library). Git does not natively support downloading only a subpart of a code repository, and we did not want more than one master Git repository.

The next sub-sections describe the many CI workflow steps and aspects managed inside build pipelines.

4.2.6.1 Fast Incremental Build

Both CORALS build pipelines (see Sections 3.3.6 and 3.3.7) share similar build and test task logic, with different parameters in the incremental build to speed it up:

• the clean build pipeline has the “Clean” parameter set to true in the “Get Sources” job, whereas it is false in the incremental build (see Sections 3.3.6 and 3.3.7; a YAML sketch follows this list);
• the incremental pipeline has the “Run only impacted tests” parameter (see Section 3.3.8) set to true in the “Test Solution CORALS” task of the “OMI” job, as well as in the “Test Solution Testbed” task of the “Testbed” job; and
• in the incremental pipeline, the integration tests, which take longer to run, are not executed.
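
A minimal YAML sketch of the first difference, for illustration only:

  steps:
  - checkout: self
    clean: true                # clean build pipeline; set to false in the incremental pipeline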

4.2.6.2 Pre Transformation Tasks

The only pre-transformation task currently implemented in both pipelines sets, through a PowerShell script, system variables that point to the location of the RTI DDS license and libraries, so that the unit and integration tests that require them run properly (see Figure 9).


Figure 9: Append RTI to path pre-transformation PowerShell task.
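
A sketch of such a task, expressed in YAML and assuming a hypothetical ThirdParty folder layout for the RTI libraries and a license location exposed through an RTI_LICENSE_FILE variable, could look like the following.

  - task: PowerShell@2
    displayName: Append RTI to path
    inputs:
      targetType: inline
      script: |
        # Hypothetical location of the RTI libraries within the sources
        $rtiHome = "$(Build.SourcesDirectory)\ThirdParty\RTI"
        # Make the RTI binaries visible to subsequent tasks in the job
        Write-Host "##vso[task.prependpath]$rtiHome\bin"
        # Point the tests to the license file
        Write-Host "##vso[task.setvariable variable=RTI_LICENSE_FILE]$rtiHome\rti_license.dat"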

4.2.6.3 External Dependencies

When a CI infrastructure is connected to the Internet, a build pipeline may include pre-transformation tasks that retrieve the latest versions of third-party dependencies from the Internet. Since CORALS is developed and built on a stand-alone network, dependencies such as open-source utility libraries are all included in source control and are retrieved at the same time as the code (see Figure 6).

Consequently, when a developer wants to update an external dependency to a newer version, he has to commit the new version of the library to his local Git repository, verify that everything works as expected, run the unit tests, and then push the changes to the remote master repository on the Azure DevOps server, where the rest of the checks occur. This way, all other team members, as well as the build agents, have access to the new libraries.

4.2.6.4 Jobs

The CORALS incremental build pipeline is organized into three jobs:

• OMI;
• Java; and
• Testbed.

Separating the whole continuous integration execution into three independent jobs allows a request to be executed faster, since the three jobs can be parallelized (see Section 3.3.5) on three different build agents taken from the pool. Jobs are not executed in any predefined order.

Jobs are further decomposed into tasks. Tasks inside a job run sequentially, and are configured to run only if all previous tasks succeeded.

Jobs make extensive use of Azure DevOps built-in variables in order to access the various artefacts resulting from the execution of the jobs’ tasks (as an example, see the use of “$(Build.SourcesDirectory)” in Figure 9).

4.2.6.4.1 OMI Job

This job deals with the execution of tasks related to building the OMI and other C# projects (see Figure 10). The OMI is the main CORALS application, which an end-user must run first in order to start the system. The job builds all projects that compose the CORALS solution file.

Figure 10: Tasks of the OMI job.

After the code, solution file and all project files have been downloaded onto the build agent via the “Get Sources” job, the build process (see Figure 11) starts. On the build server, all commands are executed by the build agent from a command prompt by calling console applications. The MSBuild.exe application is called with all the options found in the solution and project files, which Azure DevOps automatically reproduces as command-line parameters.
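
As an illustration, such a build step can be expressed with the built-in Visual Studio build task; the solution path, configuration and platform below are placeholders, not the actual CORALS settings.

  - task: VSBuild@1
    displayName: Build Solution CORALS
    inputs:
      solution: '$(Build.SourcesDirectory)\CORALS.sln'   # placeholder solution path
      configuration: 'Release'
      platform: 'Any CPU'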


Figure 11: Incremental build process of the OMI job.

As the build executes, numerous log entries are displayed in the build window on the Azure DevOps pipeline execution web page. The logs also show which agent is executing the job, and in which specific directory the code is being built and the results are being written. The log is retained after the job completes and is accessible from the Azure DevOps web page that displays the build results.


Once the code has successfully compiled, test tasks are executed with MSTest.exe; Azure DevOps merges the options found in both the pipeline and the project files and translates them into command-line parameters. Test results and associated information are then published so as to be accessible from the test report attached to the build (see Figure 12). Tests are reported in an XML format that Azure DevOps Server can understand and display, on demand, in a browser from within its web user interface.
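
The sketch below shows how such a test task could be expressed with the built-in Visual Studio test task (the pipelines described here invoke MSTest.exe, so the exact task in use may differ); the assembly pattern is a placeholder, and the “Run only impacted tests” behaviour of Section 3.3.8 maps to the runOnlyImpactedTests option.

  - task: VSTest@2
    displayName: Test Solution CORALS
    inputs:
      testSelector: 'testAssemblies'
      testAssemblyVer2: |
        **\*Tests.dll
        !**\obj\**
      runOnlyImpactedTests: true   # true in the incremental pipeline only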


Figure 12: Incremental test process of the OMI job.


4.2.6.4.2 Java Job

This job deals with the execution of tasks related to building all Java applications, including the backend battle management algorithms, and some stimulation and simulation applications and tools (see Figure 13). It uses the same Gradle build system used in NetBeans to build all projects that compose the Java root Gradle project.

Figure 13: Tasks of the Java job.

The Gradle job leverages build logic bundled with Gradle, as well as custom build logic defined in custom Gradle tasks.

After the Java code and all Gradle project and settings files have been downloaded onto the build agent, the build process starts. The Gradle application is called by the job (see Figure 14) with the parameters found in the Gradle project files, the Gradle settings files and the pipeline. Since the Gradle binaries (the Gradle application) are included in source control, the Java job uses the same Gradle version that developers use inside NetBeans on their development workstations.
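
For illustration, a Gradle build-and-test step could look like the following, assuming the checked-in Gradle distribution is invoked through a wrapper script; the paths and task names are placeholders.

  - task: Gradle@2
    displayName: Build Java Projects
    inputs:
      gradleWrapperFile: 'java/gradlew.bat'    # placeholder path to the checked-in wrapper
      tasks: 'build'
      publishJUnitResults: true
      testResultsFiles: '**/TEST-*.xml'        # JUnit XML reports produced by the Gradle test task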


Figure 14: Incremental build process of the Java job.

Similarly to the OMI job, as the build executes, log entries are displayed in the build window on the Azure DevOps pipeline execution web page. The logs also show which agent is executing the job, and in which specific directory the code is being built and the results are being written. The log is retained after the job completes and is accessible from the Azure DevOps web page that displays the build results.


Once the build has successfully completed, the Gradle test job is executed (see Figure 15). It calls the Gradle test task specified in the pipeline; that task must already exist in the Gradle project files. The test task runs the JUnit tests, with parameters found both in the pipeline and in the Gradle files. As for the OMI job, all tests are executed in the clean pipeline, whereas in the incremental build pipeline only unit tests exercising changed code are executed (see Section 3.3.7).


Figure 15: Incremental test process of the Java job.


4.2.6.4.3 Testbed Job

This job deals with the execution of tasks related to building the non-Java Testbed applications and libraries (see Figure 16). It builds all projects that compose the Testbed solution file.

Figure 16: Tasks of the Testbed job.

After the code, solution file and all project files have been downloaded onto the build agent, the build process (see Figure 17) starts. On the build server, all commands are executed by the build agent from a command prompt by calling console applications. The MSBuild.exe application is called with all the options found in the solution and project files. Most Testbed components are programmed in C++, so the C++ compiler and linker are invoked by MSBuild.exe.
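
An illustrative sketch of the equivalent build step for the Testbed solution follows; the solution path and platform are placeholders, not the actual pipeline settings.

  - task: VSBuild@1
    displayName: Build Solution Testbed
    inputs:
      solution: '$(Build.SourcesDirectory)\Testbed.sln'   # placeholder solution path
      configuration: 'Release'
      platform: 'x64'                                     # native C++ projects typically target a specific platform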


Figure 17: Incremental build process of the Testbed job.

As for all jobs, as the build executes, log entries are displayed in the build window on the Azure DevOps pipeline execution web page. The logs also show which agent is executing the request, and in which specific directory the code is being built. The log is retained after the job completes and is accessible from the Azure DevOps web page that displays the build results.


Once the build has successfully completed, test tasks are executed with MSTest.exe; Azure DevOps merges the options found in both the pipeline and the project files and translates them into command-line parameters. Test results and associated information are then published so as to be accessible from the test report attached to the build (see Figure 18). Tests are reported in an XML format that Azure DevOps Server can understand and display, on demand, in a browser from within its web user interface.

Figure 18: Incremental test process of the Testbed job.


4.2.6.5 Post Transformation Tasks

The post-transformation tasks currently in the pipelines deal with testing, reporting and basic release management activities. They are included inside the three jobs described in the previous sections.

The clean build pipeline performs basic release management through tasks (see the Copy tasks in Figure 8) that assemble all successfully built and tested applications and libraries into one package. The package is then copied as a whole to a shared release folder on BOMNet’s file server for distribution (see Figure 5). This last phase is executed only in the clean build pipeline, and only when all previous jobs have completed successfully.
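
A sketch of such a packaging task, with a placeholder UNC path standing in for the actual BOMNet release share, could look like the following.

  - task: CopyFiles@2
    displayName: Copy Release Package
    inputs:
      SourceFolder: '$(Build.BinariesDirectory)'                          # built and tested outputs
      Contents: '**'
      TargetFolder: '\\fileserver\CORALS\Releases\$(Build.BuildNumber)'   # placeholder share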

4.2.6.6 Traceability

The normal workflow of a coder is:

1. Getting assigned a requirement to work on (which can be a bug fix);
2. Creating a work item in Azure DevOps, in which both the coder and the system record traces of each development step. The coder can organize work items in many ways; Azure DevOps offers extensive requirement management features that are out of scope of this document;
3. Creating a feature branch, when necessary, to isolate his work from others and minimize the impact of his modifications on the rest of the system;
4. Designing the solution that fulfills the requirement, coding it, then committing it to his local Git repository;
5. Pushing his modifications (including the feature branch) to the master Git repository. From there, he can, if he wants, create a pull request so that another member can review his work. An incremental build is then triggered automatically and provides build and test results; and
6. Once everybody is happy with the new code, and when it builds and tests successfully, the feature branch, if any, is merged into the master branch. Another incremental build is then triggered, and during the night the more complete clean build is performed, confirming that everything is in order.

For all these steps, Azure DevOps keeps traces of, and links between, the processes, artefacts, authors, logs, reports and results. That is the main advantage of a fully integrated, single ALM solution.

When a coder commits new code, the Git system assigns a unique commit identifier to his modifications, and the coder’s name is attached to the commit. Before committing, it is the coder’s responsibility to link the commit to the work item being implemented by providing the work item number along with his commit comments. Triggers then start the build, and Azure DevOps automatically links the build results, including test reports, to the commit identifier. This way, anyone can a posteriori analyse and investigate all work done on the application by anyone, end-to-end, trace back from the binaries to the requirements, and have access to all intermediate outputs.


4.2.6.7 Build Workflow Authoring

Traceability of modifications is inherent to the source control system. When pipeline-as-code is used and modified, the source control system automatically captures the changes made to the build workflows. If a team member instead changes the workflow directly from the Azure DevOps web page using the graphical designer, it is his responsibility to add a comment before saving his changes, so that other team members can see at a glance what modifications were made, without having to compare screenshots of configuration pages.


5 Conclusion

Even though the CORALS prototype and its development team already see benefits and returns from the investments made in the application and automation of the CI strategy described in this document, there is still room for improvements that could further increase productivity.

One natural future step would be to implement continuous deployment (CD). Indeed, when we read about CI, it is almost always accompanied by CD, through the use of the term “CI/CD.” The DevOps paradigm is not fully implemented without closing the loop with automatic delivery to end-users. In our case, that would mean automatically deploying the CORALS binaries built each night, conditional on all previous steps having completed successfully.

Continuous deployment is available in Azure DevOps through its release management features. Release pipelines must be defined to provide the ALM with details about the target production environment and the rules to follow. One rule could be to back up the previous environment before overwriting it. In our case, the production environment is the Naval Battle Management Laboratory, where the stakeholders get a chance to see CORALS demonstrated.
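
As an illustration only, a future CD step could take the form of a deployment job in a multi-stage YAML pipeline (available in more recent Azure DevOps releases); the environment name and target path below are hypothetical.

  stages:
  - stage: Deploy
    jobs:
    - deployment: DeployToLab
      environment: NBML                        # hypothetical environment representing the laboratory
      strategy:
        runOnce:
          deploy:
            steps:
            - task: CopyFiles@2
              inputs:
                SourceFolder: '$(Pipeline.Workspace)'       # artifacts downloaded from the build stage
                Contents: '**'
                TargetFolder: '\\labserver\CORALS\Current'  # placeholder deployment target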

As briefly mentioned in Section 3.4, infrastructure-as-code (IaC) could be leveraged as part of a CD strategy. There are numerous possibilities with IaC. One that would be interesting to investigate and test is the automatic building of virtual machines from scratch, to serve as intermediate test environments into which CORALS is deployed periodically. Since virtual machines run on a server, they can be accessed from any workstation on BOMNet, letting the stakeholders see and test new CORALS features quickly and easily from the comfort of their office.

One aspect that has received little attention so far in the CORALS CI strategy is automated quality checks. These could be implemented during the pre-transformation and post-transformation phases of a CI execution. For example, the server could check that the syntax of the code respects the standards established by the development team, such as the number of white spaces used for indentation. These checks are currently performed in real time, while developers write their code, by ReSharper for C# and C++ code, but there is no such tool currently used for the Java code written in NetBeans. Moreover, ReSharper does not prevent someone from committing code that violates the standards. Azure DevOps Server could, if it was programmed to detect it, abort a build that does not respect the standards and notify the team of the failure. This is one example of a pre-transformation validity check that could be automated and integrated into the CORALS CI strategy.
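
For example, a pre-transformation step could run a team-maintained style-check script and abort the build when it reports violations; the script name and location below are hypothetical.

  - task: PowerShell@2
    displayName: Check Coding Standards
    inputs:
      targetType: filePath
      filePath: '$(Build.SourcesDirectory)\tools\check-style.ps1'   # hypothetical script
      failOnStderr: true
    # the task also fails when the script exits with a non-zero code, which aborts the rest of the job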

Another example, this time a post-transformation check, would be to compute the code coverage achieved by the unit tests. Code coverage is a widely used metric that automatically provides the percentage of the code that is exercised (called) by unit tests. In an ideal world, especially for a test-driven development team such as CORALS, close to 90 percent code coverage should be the objective. An automated post-transformation task that reports the code coverage obtained by a new release would inform the team whenever they deviate too far from the target percentage.
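
A sketch of such a check, assuming coverage is collected through the existing Visual Studio test task (the assembly pattern is a placeholder):

  - task: VSTest@2
    displayName: Test Solution CORALS
    inputs:
      testSelector: 'testAssemblies'
      testAssemblyVer2: '**\*Tests.dll'   # placeholder pattern
      codeCoverageEnabled: true           # publish a code coverage figure with the test results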


References

[1] Martin, R.C. (2008), Clean Code: A Handbook of Agile Software Craftsmanship, Prentice Hall.

[2] Andreessen, M. (2011), Why Software Is Eating The World, August 2011, Wall Street Journal, Retrieved May 29, 2020 from: https://www.wsj.com/articles/SB10001424053111903480904576512250915629460.

[3] Reeves, J. (1992), What is Software Design? In C++ Journal, Fall 1992, Retrieved May 29, 2020 from: http://www.developerdotstar.com/mag/articles/reeves_design.html.

[4] Benaskeur, A. (2017), EAMD (01bc) Project, Battle Management Activities, Internal Communication (Presentation PowerPoint).

[5] Watts, S., Raza, M., SaaS vs PaaS vs IaaS: What’s the Difference and How To Choose, Retrieved April 2020 from: https://www.bmc.com/blogs/saas-vs-paas-vs-iaas-whats-the-difference-and-how-to-choose/.

[6] Solid State Drive, Retrieved April 2020 from: https://en.wikipedia.org/wiki/Solid-state_drive.

[7] Hahn, M. (2020), How Much Faster is the SSD Drive than a Conventional HDD in Actual Use, Retrieved May 2020 from: https://www.quora.com/How-much-faster-is-the-SSD-Drive-than-a-conventional-HDD-in-actual-use.

[8] Behrens, B.C. (2020), Microsoft Azure DevOps Engineer: Creating an Automated Build Workflow, Pluralsight.

[9] Chamberland, S. (2019), Proposed Git Branching Strategy.

[10] Hammant, P. (2017), The Rise of Test Impact Analysis, Retrieved June 2020 from: https://martinfowler.com/articles/rise-test-impact-analysis.html.

[11] String, D. (2019), Everything-as-Code, Save Everything as Code – Configuration, Infrastructure and Pipelines, Open Practice Library, Retrieved April 2020 from: https://openpracticelibrary.com/practice/everything-as-code/.

[12] Infrastructure As Code, Retrieved April 2020 from: https://en.wikipedia.org/wiki/Infrastructure_as_code.


List of Symbols/Abbreviations/Acronyms/Initialisms

AEC     Advanced Engageability Capability
ALM     application lifecycle management
ATEC    Advanced Threat Evaluation Capability
AWW     Above Water Warfare
BOM     Battle Operation Management
BOMNet  Battle Operation Management Network
CAF     Canadian Armed Forces
CFMWC   Canadian Forces Maritime Warfare Centre
CI      continuous integration
CMS     Combat Management Systems
CORALS  Combat Resource Allocation Support
COTS    commercial off the shelf
CPU     Central Processing Unit
DevOps  development and operations
DND     Department of National Defence
DNR     Directorate of Naval Requirements
DRDC    Defence Research and Development Canada
EAMD    Emerging Air Missile Defence
ESX     Elastic Sky X
GB      Gigabyte
GPO     Group Policy
IaC     infrastructure-as-code
IAAS    Infrastructure-As-A-Service
IDE     Integrated Development Environment
IDL     Interface Description Language
IEC     information exchange control
IT      information technology
LAN     local area network
MVVM    Model View ViewModel
OMI     Operator-Machine Interface


PAAS    Platform-As-A-Service
RAID    Redundant Array of Independent Disks
RAM     random access memory
R&D     Research and Development
RTI     Real Time Innovations
SAAS    Software-As-A-Service
SADM    Ship Air Defence Model
SGD     Strike Group Defender
SSD     Solid State Drive
TB      Terabyte
URL     Uniform Resource Locator
VM      Virtual Machine
XML     eXtensible Markup Language
YAML    YAML Ain’t Markup Language


DOCUMENT CONTROL DATA
*Security markings for the title, authors, abstract and keywords must be entered when the document is sensitive

1. ORIGINATOR (Name and address of the organization preparing the document. A DRDC Centre sponsoring a contractor's report, or tasking agency, is entered in Section 8.)

DRDC – Valcartier Research Centre
Defence Research and Development Canada
2459 route de la Bravoure
Québec (Québec) G3J 1X5
Canada

2a. SECURITY MARKING (Overall security marking of the document including special supplemental markings if applicable.)

CAN UNCLASSIFIED

2b. CONTROLLED GOODS

NON-CONTROLLED GOODS
DMC A

3. TITLE (The document title and sub-title as indicated on the title page.)

Combat Resource Allocation Support (CORALS) Continuous Integration Strategy: Toward Faster Delivery Through Automation of Software Development Build, Test, Integration and Deployment Processes

4. AUTHORS (Last name, followed by initials – ranks, titles, etc., not to be used)

Turgeon, S.

5. DATE OF PUBLICATION (Month and year of publication of document.)

February 2021

6a. NO. OF PAGES (Total pages, including Annexes, excluding DCD, covering and verso pages.)

50

6b. NO. OF REFS (Total references cited.)

12

7. DOCUMENT CATEGORY (e.g., Scientific Report, Contract Report, Scientific Letter.)

Reference Document

8. SPONSORING CENTRE (The name and address of the department project office or laboratory sponsoring the research and development.)

DRDC – Valcartier Research Centre Defence Research and Development Canada 2459 route de la Bravoure Québec (Québec) G3J 1X5 Canada

9a. PROJECT OR GRANT NO. (If appropriate, the applicable research and development project or grant number under which the document was written. Please specify whether project or grant.)

01bc - Emerging Air & Missile Defence Concept and Techniques

9b. CONTRACT NO. (If appropriate, the applicable number under which the document was written.)

10a. DRDC PUBLICATION NUMBER (The official document number by which the document is identified by the originating activity. This number must be unique to this document.)

DRDC-RDDC-2021-D025

10b. OTHER DOCUMENT NO(s). (Any other numbers which may be assigned this document either by the originator or by the sponsor.)

11a. FUTURE DISTRIBUTION WITHIN CANADA (Approval for further dissemination of the document. Security classification must also be considered.)

Public release

11b. FUTURE DISTRIBUTION OUTSIDE CANADA (Approval for further dissemination of the document. Security classification must also be considered.)

12. KEYWORDS, DESCRIPTORS or IDENTIFIERS (Use semi-colon as a delimiter.)

Software Engineering; DevOps; Automation and Decision Support; Agile Software Development; Software; Software Development; Deployment; Naval Battle Management; Software Design and Architecture; Computer Networking; Network; Cloud Computing

13. ABSTRACT (When available in the document, the French version of the abstract must be included here.)

This Reference Document describes the continuous integration (CI) processes currently utilized for the development of software prototype Combat Resource Allocation Support (CORALS). The CI strategy is specifically tailored to support both the research project requirements and to meet the needs of its stakeholders and practitioners.

Several principles, techniques and best practices for automation of software development activities were selected, implemented and tested using tools and technologies, which are detailed in this document.

The positive impacts of this CI strategy are manifold: productivity of the development team has increased, stakeholders have greater access to the latest daily build of the prototype, the application is more stable, and software development process are now well documented, to name a few.

Ce document de référence décrit la stratégie d’intégration continue présentement utilisée pour le développement du prototype « Combat Resource Allocation Support (CORALS) ». Cette stratégie a été spécialement conçue pour supporter les besoins du projet de recherche ainsi que ceux de ses principaux intervenants.

Pour concevoir cette stratégie, des principes et techniques reconnus pour l’automatisation de plusieurs activités du développement logiciel et de la gestion du cycle de vie des applications ont été sélectionnées, implémentées et testées à l’aide d’une série d’outils et technologies aussi présentés en détail.

Les impacts positifs de la stratégie d’intégration continue sont nombreux: la productivité de l’équipe de développement a augmenté, les utilisateurs ont plus facilement accès à la dernière version journalière du prototype, l’application est plus stable, et les processus de développement logiciel sont mieux documentés, pour n’en nommer que quelques-uns.