Optimizing Vehicle Off-Road Assessment & Management at Canadian Manoeuver Training Centre

Bohdan L. Kaluzny
Materiel Group Operational Research / Acquisition Support Team

DRDC CORA TR 2007–023 December 2007

Defence R&D Centre for Operational Research and Analysis

Materiel Group Operational Research / AST Assistant Deputy Minister (Materiel)



Principal Author

Original signed by Bohdan L. Kaluzny

Bohdan L. Kaluzny

Approved by

Original signed by R.M.H. Burton

R.M.H. Burton
Acting Section Head (Joint and Common OR)

Approved for release by

Original signed by J. Tremblay

J. Tremblay
Chief Scientist

The information contained herein has been derived and determined through best practice and adherence to the highest levels of ethical, scientific and engineering investigative principles. The reported results, their interpretation, and any opinions expressed therein, remain those of the authors and do not represent, or otherwise reflect, any official opinion or position of DND or the Government of Canada.

© Her Majesty the Queen in Right of Canada as represented by the Minister of National Defence, 2007

© Sa Majesté la Reine (en droit du Canada), telle que représentée par le ministre de la Défense nationale, 2007

Abstract

Canadian Forces Base/Area Support Unit Wainwright is the home of the Canadian Manoeuver Training Centre, the national centre of excellence for collective training. All CF units that deploy on operations to Afghanistan come to Wainwright to train as formed groups prior to deploying. The base supplies the vehicles required for their training experience. The Canadian Forces Base Wainwright Maintenance Workshop Commander is responsible for ensuring operational fitness of the vehicle fleet and advises the Canadian Manoeuver Training Centre Training Authority on vehicle condition. Mid-exercise vehicle sampling (inspection) is employed to assess the vehicle off-road rate. The challenge is to optimize the process to minimize the resources (maintenance crew) required to give timely and sound feedback to the Training Authority, and to develop the capability to predict end-of-exercise vehicle off-road rates based on mid-exercise sampling.

This study develops tools that enable the Maintenance Workshop Commander to quantify and justify the vehicle fleet's readiness. A stratified random sampling model is proposed. The optimal sample size is calculated for the desired confidence and precision levels. Vehicle off-road inference and projection models are developed using probability theory and simulation. Mathematical optimization is used to optimize resource allocation and vehicle readiness for subsequent exercises.

Résumé

Canadian Forces Base/Area Support Unit Wainwright, Alberta, is the home of the Canadian Manoeuver Training Centre, the national centre of excellence for collective training. All Canadian Forces units that deploy on operations to Afghanistan come to CFB Wainwright to train as formed groups before deploying. The base supplies the vehicles required for their training. The CFB Wainwright Maintenance Workshop Commander is responsible for ensuring the operational fitness of the vehicle fleet and for advising the Canadian Manoeuver Training Centre Training Authority on vehicle condition. Vehicle sampling (inspection) is used to assess the vehicle off-road rate. The challenge is to optimize the process so as to minimize the resources (maintenance personnel) required to give timely and relevant feedback to the Training Authority, and to develop the capability to predict end-of-exercise vehicle off-road rates from mid-exercise sampling.

This study develops the tools that enable the Maintenance Workshop Commander to quantify and justify the operational readiness of the vehicle fleet. A random sampling model is proposed to determine the optimal sample size based on the desired precision and confidence levels. Vehicle off-road inference and projection models are developed using probability theory and simulation. Mathematical optimization is used to optimize resource allocation and vehicle readiness for subsequent exercises.


Executive summary

Optimizing Vehicle Off-Road Assessment & Management at Canadian Manoeuver Training Centre

Bohdan L. Kaluzny; DRDC CORA TR 2007–023; Defence R&D Canada – CORA; December 2007.

Background: Canadian Forces Base/Area Support Unit (CFB/ASU) Wainwright, Alberta is the home of the Canadian Manoeuver Training Centre (CMTC), the national centre of excellence for collective training. Wainwright has been a centre of army transformation over the past few years: in particular, all CF units that deploy on operations to Afghanistan come to CFB Wainwright to train as formed groups prior to deploying. These are large-scale exercises. For example, over 2,500 soldiers participated in Maple Guardian Exercise 2007 in May 2007. The soldiers arrive at Wainwright with their individual kit and weapon. The base supplies everything else required for their training experience, including vehicles. It is paramount that essential vehicles be in operational condition upon the arrival of the visiting units. The CFB Wainwright Maintenance Workshop Commander (MWC) is responsible for ensuring the operational fitness of the vehicle fleet and that it is in operational condition for handover. The MWC needs to advise the CMTC Training Authority (TA) on the fleet's condition to assess the impact on readiness for subsequent exercises. The MWC needs to be able to quantify the impact and provide justification of his analysis. Based on this information, the TA directs the tempo of the exercise and maintenance activities. One of the tools available to the MWC to assess the vehicle off-road (VOR) rate is mid-exercise vehicle sampling, where a subset of the vehicles is inspected.

The CFB/ASU Wainwright VOR challenge is to optimize the sampling process to minimize the resources (maintenance crew) required for the procedure while giving timely and sound feedback to CMTC, and to develop the capability to predict end-of-exercise VOR rates based on mid-exercise sampling.

Main Contributions: This study develops the tools to enable the MWC to quantify and justify the vehicle fleet's operational condition. Using relevant historical data specific to Wainwright, the expected vehicular failure rates, top failed/repaired components per vehicle, and median time to repair (MTTR) are calculated. Four models are developed and implemented in Microsoft Excel to aid the MWC in optimizing vehicle sampling and managing VOR rates. The first model, the Sample Size Calculator, calculates the number of vehicles that should be sampled during a sampling procedure in order to obtain statistically significant information. Stratified random sampling with proportional, optimal-cost, or optimal-precision allocation is proposed. The second model, the VOR Inference Calculator, is used upon completion of the vehicle sampling procedure. Based on specified error margins, it provides the range of the estimated VOR rate of all the vehicles. The model is based on Bayesian analysis and the hypergeometric probability distribution. The VOR Projection Calculator implements a Monte Carlo simulation algorithm that takes into account both historical and observed failure rate data, estimated or observed mileage, and expected repair times to project the end-of-exercise VOR rate and required number of maintenance hours. Finally, the VOR Management Optimizer provides a tool for the MWC to assess and give

backed advice to CMTC TA on VOR levels and to prioritize vehicle maintenance for subsequent training exercises.
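The hypergeometric reasoning behind the VOR Inference Calculator can be sketched in a few lines. The snippet below is an illustrative simplification, not the report's Excel implementation: it assumes a uniform prior on the number of VOR vehicles D in a fleet of N, and computes the posterior from a simple random sample of n vehicles in which k were found VOR.

```python
from math import comb

def vor_posterior(N, n, k):
    """Posterior P(D | k) over the number of VOR vehicles D in a fleet of N,
    given k VOR vehicles observed in a simple random sample of n, assuming
    a uniform prior on D and a hypergeometric sampling likelihood."""
    weights = []
    for D in range(N + 1):
        if k <= D and n - k <= N - D:
            w = comb(D, k) * comb(N - D, n - k) / comb(N, n)
        else:
            w = 0.0  # impossible fleet states get zero weight
        weights.append(w)
    total = sum(weights)
    return [w / total for w in weights]

def credible_interval(post, level=0.90):
    """Equal-tail credible interval for D from the posterior list."""
    tail = (1.0 - level) / 2.0
    cum, lo, hi, lo_found = 0.0, 0, len(post) - 1, False
    for D, p in enumerate(post):
        cum += p
        if not lo_found and cum >= tail:
            lo, lo_found = D, True
        if cum >= 1.0 - tail:
            hi = D
            break
    return lo, hi

# Example: fleet of 100, sample of 20, 4 found VOR
post = vor_posterior(100, 20, 4)
lo, hi = credible_interval(post, 0.90)
```

For these example inputs the posterior peaks near D = 20 (the sample proportion scaled up to the fleet), and the interval width conveys how much uncertainty a sample of 20 leaves.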

Results: The VOR Sample Size Calculator, VOR Inference Calculator and VOR Projection Calculator were successfully applied during Maple Guardian Exercise 2007 (MG0701). Unfortunately, the Maintenance Workshop inspection team did not sample the number of vehicles suggested by the VOR Sample Size Calculator. This led to limited information being inputted into the VOR Inference Calculator. Similarly, vehicle odometer readings were not recorded and vehicle usage was only roughly estimated. Despite the limited or missing data, the models provided the MWC with scientifically-backed VOR rate estimates. These results have encouraged the MWC to enforce a more systematic sampling procedure for subsequent exercises.

Advice: The MWC is advised to experiment with the proposed models to measure their practical effectiveness. In doing so, the following should be respected:
1. to best utilize maintenance resources, the MWC should select (per vehicle) a subset of components that will be inspected during the sampling procedure. This selection should be based on the list of the top component failures and balanced against the inspection time per vehicle;
2. vehicle sampling should strive to attain the suggested sample size;
3. simple random sampling should be implemented. The vehicles to be inspected should be determined before reaching the forward operating base. The author suggests randomly choosing vehicles by their CF Registration (CFR) plate numbers. An unrepresentative sample might suggest that the fleet is in better (or worse) condition than reality;
4. vehicle odometer readings should be noted before the start of an exercise and recorded when inspected. The TF should aid in providing the estimated daily mileage of each vehicle type;
5. vehicle failures and mileage should be carefully recorded in Work Order and Fleet Management System (FMS) databases. Historical data used in the proposed models should be routinely updated in order to remain consistent; and
6. assessment of VOR levels should be based on the projected number of maintenance hours and the availability requirements for the next TF exercise. A VOR rate on its own does not provide enough information.

Future Work: It is advised that the CF consider implementing the proposed methodologies at other training centres, pending the results of experimentation with the models at CFB Wainwright.

Sommaire

Optimizing Vehicle Off-Road Assessment & Management at Canadian Manoeuver Training Centre

Bohdan L. Kaluzny; DRDC CORA TR 2007–023; Defence R&D Canada – CORA; December 2007.

Background: Canadian Forces Base/Area Support Unit (CFB/ASU) Wainwright, Alberta, is the home of the Canadian Manoeuver Training Centre (CMTC), the national centre of excellence for collective training. Wainwright has been a centre of army transformation over the past few years: in particular, all CF units that deploy on operations to Afghanistan come to CFB Wainwright to train as formed groups prior to deploying. These are large-scale exercises; for example, over 2,500 soldiers participated in Maple Guardian Exercise 2007 in May 2007. The soldiers arrive at Wainwright with their individual kit and weapon. The base supplies everything else required for their training, including vehicles. It is paramount that essential vehicles be in operational condition upon the arrival of the visiting units. The CFB Wainwright Maintenance Workshop Commander (MWC) is responsible for ensuring the operational fitness of the vehicle fleet and that it is in operational condition for handover. The MWC must advise the CMTC Training Authority on the fleet's condition in order to assess the impact on readiness for subsequent exercises. The MWC needs to be able to quantify the impact and justify his analysis. Based on this information, the Training Authority directs the tempo of the exercise and maintenance activities. One of the tools available to the MWC to assess the vehicle off-road (VOR) rate is mid-exercise vehicle sampling, where a subset of the vehicles is inspected.

The CFB/ASU Wainwright VOR challenge is to optimize the sampling process to minimize the resources (maintenance personnel) required for the procedure while giving timely and sound feedback to CMTC, and to develop the capability to predict end-of-exercise VOR rates based on mid-exercise sampling.

Contributions: This study develops the tools that enable the MWC to quantify and justify the operational condition of the vehicle fleet. Using historical data specific to Wainwright, the expected vehicle failure rates, the top failed/repaired components per vehicle, and the expected repair times are calculated. A random sampling model is proposed to determine the optimal sample size based on the desired precision and confidence levels. VOR inference and projection models are developed using probability theory and simulation. Mathematical optimization is used to optimize resource allocation and vehicle readiness for subsequent exercises. The models are implemented in a Microsoft Excel file and have already been used during Maple Guardian Exercise 2007 (MG0701).

Results: The models, implemented in a Microsoft Excel file, were successfully applied during Maple Guardian Exercise 2007 (MG0701). Unfortunately, the Maintenance Workshop inspection team did not sample the number of vehicles suggested by the VOR Sample Size Calculator. This led to limited information being entered into the VOR Inference Calculator. Similarly, vehicle odometer readings were not recorded and vehicle usage was only roughly estimated. Despite the limited or missing data, the models provided the MWC with scientifically-backed VOR rate estimates. These results have encouraged the MWC to enforce a more systematic sampling procedure for subsequent exercises.

Advice: The MWC is advised to experiment with the proposed models to measure their practical effectiveness. The following should be respected:
1. to make the best use of maintenance resources, the MWC should select (per vehicle) a subset of components to be inspected during the sampling procedure. This selection should be based on the list of the top component failures and balanced against the inspection time per vehicle;
2. vehicle sampling should attain the suggested sample size;
3. the vehicles to be inspected should be determined before reaching the forward operating base. The author suggests choosing vehicles randomly by their CF Registration plate numbers. An unrepresentative sample might suggest that the fleet is in better (or worse) condition than it really is;
4. vehicle odometer readings should be noted before the start of an exercise and recorded when vehicles are inspected. The Training Force should assist by providing the estimated daily mileage of each vehicle type;
5. vehicle failures and mileage should be carefully recorded in the databases. Historical data used in the proposed models should be routinely updated in order to remain consistent; and
6. assessment of VOR levels should be based on the projected number of maintenance hours and the availability requirements for the next exercise.

Future Work: It is recommended that the CF consider implementing the proposed methodologies at other training centres, pending the results obtained from testing the models at CFB Wainwright.

Table of contents

Abstract ...... i

Résumé ...... i

Executive summary ...... iii

Sommaire ...... v

Table of contents ...... vii

List of tables ...... ix

List of figures ...... ix

Acknowledgements ...... xi

1 Introduction ...... 1

1.1 Background ...... 1

1.2 Objective ...... 3

1.3 Scope ...... 4

1.4 Organization ...... 4

2 Data ...... 5

2.1 Failure Rates and Maintenance Time ...... 5

2.2 Top Component Failures ...... 7

3 Sample Size Determination ...... 10

3.1 Key Criteria and Probability Distribution Models ...... 10

3.2 Stratified Sampling Model ...... 12

3.2.1 Stratified Sampling for Proportions ...... 13

3.2.2 Stratified Random Sampling: Optimum Allocation ...... 14

4 VOR Inference ...... 16

5 End-of-Exercise VOR Projection ...... 19

5.1 Stochastic Parameters ...... 19

5.2 Monte Carlo Algorithm ...... 20

6 VOR Management ...... 21

7 Model Implementation ...... 23

7.1 Sample Size Calculator ...... 23

7.2 VOR Inference Calculator ...... 24

7.3 VOR Projection Calculator ...... 24

7.4 VOR Management Optimizer ...... 26

8 Application: Exercise MAPLE GUARDIAN 2007 ...... 29

8.1 MG0701 Vehicle Sampling ...... 29

8.2 MG0701 VOR Inference ...... 30

8.3 MG0701 VOR Projection ...... 30

8.4 MG0701 VOR Management ...... 33

9 Conclusions and Advice ...... 35

9.1 Summary ...... 35

9.2 Advice ...... 35

9.3 Future Work ...... 36

References ...... 37

Annex A: Stratified Random Sampling Derivations ...... 39

A.1 Sample Estimate and Variance ...... 39

A.2 Derivation of Sample Sizes ...... 44

List of symbols/abbreviations/acronyms/initialisms ...... 49

Distribution List ...... 51

List of tables

Table 1: Maintenance Results from Work Order Data (2003-2006) ...... 5

Table 2: Failure Results from FMS and Work Order Data (2003-2006) ...... 5

Table 3: Top Component Failures per Vehicle (from Work Order Data 2003-2006) . . . 9

Table 4: Suggested & Actual Sample Sizes for MG0701 ...... 30

Table 5: VOR Inference for MG0701 ...... 32

Table 6: Estimated Daily Vehicle Usage (Mileage) Data for MG0701 ...... 32

Table 7: MG0701 VOR Projection Calculator Statistics on the Number of VOR Vehicles 33

Table 8: MG0701 VOR Rates: Actual vs. Projected ...... 33

Table 9: Example VOR Projection Calculator Results ...... 34

Table 10: Example VOR Management Optimizer Results ...... 34

List of figures

Figure 1: CFB Wainwright ...... 1

Figure 2: A Fleet Vehicles ...... 2

Figure 3: B Fleet Vehicles ...... 2

Figure 4: Bathtub Curve ...... 7

Figure 5: VOR Inference Examples ...... 17

Figure 6: Failure Rate (Gamma) Distributions for Selected Vehicles at CFB Wainwright . 20

Figure 7: Sample Size Calculator ...... 23

Figure 8: VOR Inference Calculator ...... 24

Figure 9: VOR Inference: Worst-Case and Best-Case Confidence Interval Errors ..... 24

Figure 10: VOR Projection Calculator: Vehicle Usage Input ...... 25

Figure 11: VOR Projection Calculator: Exercise and Maintenance Input ...... 25

Figure 12: VOR Projection Calculator: Quick Results ...... 26

Figure 13: VOR Projection Calculator: Detailed Results ...... 27

Figure 14: VOR Management Optimizer: Input ...... 27

Figure 15: VOR Management Optimizer: Detailed Results ...... 27

Figure 16: VOR Management Optimizer: Optimization ...... 28

Figure 17: Vehicle Use During Exercise Maple Guardian 2007 ...... 29

Figure 18: MG0701 VOR Inference: A Fleet VOR Probability Distributions ...... 31

Figure 19: MG0701 VOR Inference: B Fleet VOR Probability Distributions ...... 31

Figure 20: MG0701 End-Of-Exercise Projection Histogram of the Number of VOR Vehicles 32

Acknowledgements

The author thanks Dr. Geoffrey Pond for his insight into CFB Wainwright's VOR challenge following his visit to CFB Wainwright in the fall of 2006 [1], [2]. His observations and initial sampling model provided a sound starting point to build on. Mr. Alan Hill coded a preliminary version of the VOR Inference Calculator in Visual Basic which the author used as a springboard to the current model. Mr. Ed Emond and Mr. David Shaw provided invaluable probability and statistics knowledge which guided the statistics research immensely. Mr. Andy Gallant, DMIS (Director Materiel Information Systems), helped extract CFB Wainwright specific work order data from PlannExpert. Maj. Adrian Erkelens provided contacts to access essential databases. Finally, the author would like to thank Maj. Kevin Fitzpatrick, Maintenance Workshop Commander at CFB/ASU Wainwright, for his timely feedback and very helpful communications. The models proposed here spawned from his ideas and this operational research (OR) study is the result of his initiative.


1 Introduction

1.1 Background

Canadian Forces Base/Area Support Unit (CFB/ASU) Wainwright, Alberta is the home of the Canadian Manoeuver Training Centre (CMTC), the national centre of excellence for collective training. Wainwright has been a centre of army transformation over the past few years. In particular, all CF units that deploy on operations to Afghanistan come to CFB Wainwright to train as formed groups prior to deploying. These are large-scale exercises. For example, over 2,500 soldiers participated in Maple Guardian Exercise 2007 (MG0701) in May 2007. The soldiers arrive at Wainwright with their individual kit and weapon. The base supplies everything else required for their training experience, including vehicles. It is essential that vehicles be in operational condition upon the arrival of the visiting units. On two occasions each year, a second training exercise commences less than two weeks after the end of the previous one, and inherits the vehicle fleet left behind. The CFB Wainwright Maintenance Workshop Commander (MWC) is responsible for ensuring operational fitness of the vehicle fleet, and that the fleet is in operational condition for handover. The MWC advises the CMTC Training Authority (TA) on vehicle readiness, before and during exercises, especially when the condition of the fleet is such that it will impact the readiness of the following group. The MWC needs to be able to quantify the impact and provide justification of his analysis. If need be, the TA can then direct the tempo of the current exercise or direct more maintenance activities. Directorate Land Equipment Program Systems (DLEPS) requested Operational Research (OR) support from the Directorate Materiel Group Operational Research (DMGOR) to provide the means for the MWC to quantify and justify the fleet readiness [3].

Figure 1: Gun crew from 1 Royal Canadian Horse Artillery rides in the back of a Medium Logistic Vehicle Wheeled (MLVW) in the Wainwright training area.

CFB Wainwright's maintenance base is responsible for maintaining approximately 1,500 vehicles. There are currently 25 vehicle technicians to perform general and mechanical maintenance. In addition, there are specialized technicians, namely weapon technicians; electronic optronic (EO) technicians; material technicians; and land communications and information systems (LCIS) technicians. The number and type of vehicles that are lent out to visiting Training Forces (TF) varies. Recent exercises have demanded upwards of three to four hundred vehicles. The MWC is mainly concerned about the operational condition of nine vehicle types that form the majority of the vehicles usually lent out. These vehicles are grouped into two fleet classes: A fleet and B fleet. The A fleet consists of Bison Armoured Vehicles (BISON), Recce Coyotes (LAV COYOTE), Light Armoured Vehicle III Armoured Personnel Carriers (LAV III APC), Leopard tanks (LEOPARD), and Tracked Light Armoured Vehicles (TLAV). At the end of 2006 there were 164 A fleet vehicles at CFB Wainwright. The B fleet consists of non-armoured vehicles: Light Utility Vehicle Wheeled (LUVW) (also known as G-Wagon), Heavy Logistic Vehicle Wheeled (HLVW), Medium Logistic Vehicle Wheeled (MLVW), and Light Support Vehicle Wheeled (LSVW). A total of 550 B fleet vehicles resided at CFB Wainwright at the end of 2006. Figures 2 and 3 show the different vehicle types.

Figure 2: A Fleet Vehicles (top-left to bottom-right): LAV III, LEOPARD, COYOTE, TLAV, and BISON

Figure 3: B Fleet Vehicles (left to right): MLVW, LUVW, LSVW, and HLVW

A subset of the vehicle fleets is handed over to the TF at the start of the exercise. The vehicles are expected to operate for three to five weeks in difficult field conditions. During the exercise the visiting TF is responsible for first-line maintenance of the vehicles. If a vehicle fails or is deemed unsafe for use, it is tagged as Vehicle Off-Road (VOR). A vehicle can be grounded due to: (1) the vehicle or its equipment is deemed unsafe for use; (2) further use of the vehicle may cause damage; or

(3) a mission essential system is no longer functioning.

The factors influencing VOR rates include:
– TF emphasis on operator maintenance;
– operator care;
– weather/climate;
– conditions of use (on/off road, etc.);
– exercise tempo;
– capabilities of the TF maintenance organization;
– availability of spare parts; and
– user culture (“rented fleet” mentality).

The analogous preventative measures for controlling VOR are:
– command endorsement of fleet care;
– mandatory scheduled operator inspections and fault reporting;
– daily spare parts replenishment;
– CMTC assessment of sustainment and control of exercise tempo; and
– TF and ASU maintenance crew collaboration.

In order to assess the VOR rate, the MWC implemented vehicle sampling: mid-exercise inspections where part of the ASU Wainwright maintenance crew travels to the forward operating base being used by the TF to examine the state of the vehicles. The objective is to survey the status of as many vehicles as possible, subject to the availability of the vehicles, as some may be in field use. This sampling is not random: vehicles are inspected only if they are available.

Vehicle technicians check steering, lights, high beams, signals, brake lights, horn, ramp operation, engine start, fluid levels, and leaks, mainly by visual inspection. The inspection is similar to the recommended preventative maintenance [4]. The specialized technicians perform similar inspections on corresponding vehicle parts. Vehicle technicians require around 15 minutes to inspect a single vehicle. At the end of the exercise, each VOR vehicle is queued for repair and maintenance, and Work Orders (WO) are opened. Vehicle sampling provides CMTC with a snapshot of the state of the fleet, but also acts as a strong motivator for sustainment and fosters collaboration between TF and Wainwright maintenance crews. The drawback of vehicle sampling is that the ASU Wainwright maintenance crew is removed from their primary task of vehicle maintenance and repair. Sampling can be time-consuming and an inefficient use of limited specialized resources as, alternatively, technicians could be performing corrective maintenance on known VOR vehicles. The Wainwright VOR challenge is twofold:
1. optimize the sampling process to minimize the resources (maintenance crew) required for the procedure while giving sound feedback to CMTC; and
2. develop the capability to predict end-of-exercise VOR rates based on mid-exercise sampling.
The latter item is of particular importance when back-to-back training course serials are scheduled at CMTC.
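To give a rough feel for the sample sizes at stake, the classical finite-population formula for estimating a proportion provides a first-order answer. This is an illustrative sketch only; the stratified model of Section 3 refines it with per-stratum allocation. The confidence level and error margin below are example inputs, not the report's chosen values.

```python
import math

def sample_size(N, z=1.96, p=0.5, e=0.10):
    """Sample size needed to estimate a proportion (e.g. a VOR rate) to within
    +/- e at the confidence level implied by z, with the finite-population
    correction for a fleet of N vehicles. p=0.5 is the conservative choice."""
    n0 = z ** 2 * p * (1 - p) / e ** 2           # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / N))    # finite-population correction

# Example: 95% confidence, +/- 10% precision, applied to the
# end-of-2006 fleet sizes (164 A fleet and 550 B fleet vehicles)
n_a = sample_size(164)
n_b = sample_size(550)
```

Note how weakly the required sample grows with fleet size: the B fleet is more than three times larger than the A fleet, yet needs only about a third more inspections.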

1.2 Objective

The objective of this study is to tackle the CFB/ASU Wainwright VOR challenge. The aim is to provide the MWC with a scientific tool set to use during and post training exercises enabling

him/her to provide sound advice to the Training Authority on the status of vehicles. In particular, the objectives were to:
1. analyze available historical vehicle data from CFB Wainwright to determine vehicle failure rates and compile a list of the most-likely vehicle sub-components that may fail;
2. apply statistical theory to determine the optimal number of vehicles per type to inspect based on inspection time and desired confidence and precision levels;
3. apply probability theory to correctly infer the sampling results to the entire vehicle population;
4. develop a simulation model to accurately project the number of VOR vehicles to expect at the end of an exercise; and
5. develop an optimization model to maximize vehicle readiness for subsequent exercises, prioritize vehicle maintenance, and optimize resource allocation.
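The flavour of the last objective can be conveyed with a toy version of the prioritization problem. The report's actual model is the Integer Linear Program of Section 6; the snippet below is only a sketch, and the vehicle repair-hour and priority figures are invented. Given an estimated repair time and a priority weight per VOR vehicle, plus a budget of technician hours, a 0/1 knapsack picks which vehicles to repair first:

```python
def prioritize_repairs(repair_hours, priorities, budget_hours):
    """Toy 0/1 knapsack by dynamic programming: choose the set of VOR
    vehicles to repair that maximizes total priority without exceeding
    the technician-hour budget. Repair hours are assumed integer."""
    # best[b] = (best achievable priority, indices chosen) using b hours
    best = [(0, frozenset())] * (budget_hours + 1)
    for i, (h, v) in enumerate(zip(repair_hours, priorities)):
        for b in range(budget_hours, h - 1, -1):  # descending: each vehicle at most once
            val = best[b - h][0] + v
            if val > best[b][0]:
                best[b] = (val, best[b - h][1] | {i})
    return best[budget_hours]

# Five hypothetical VOR vehicles, 70 technician hours available
value, chosen = prioritize_repairs([20, 35, 15, 50, 10], [3, 5, 2, 6, 1], 70)
```

With these numbers the optimizer skips the 50-hour repair despite its high priority, because two faster repairs deliver more total value within the same budget; the real ILP captures the same trade-off at fleet scale.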

1.3 Scope

This study examined vehicle usage and failure data specific to CFB Wainwright from 2003 to 2006 (calendar years). It was not within the scope of the study to fully characterize the failure rate curve of each of the vehicles. Instead, vehicle failure rates were assumed to be constant. The projection models developed do not directly take into account adverse climate/weather conditions, operator abuse, availability of spare parts, or other factors for which historical data was not readily available. The sampling models developed are based on simple random sampling which requires effort to implement in practice. This study does not account for potential biases in the sampling procedure.

1.4 Organization

In the following section we extract relevant historical data from Assistant Deputy Minister Materiel (ADM(MAT)) databases. In particular, for each vehicle type, we calculate the expected failure rate and compile a list of the top failed/repaired components. The median time to repair (MTTR) per vehicle type is also calculated. In Section 3 we show how to determine the right sample size based on desired confidence and precision levels. Stratified random sampling is proposed with three different allocation options depending on the situation. In the subsequent section, the theory of sampling inference is presented to correctly generalize the sampling results to the entire fleet. In Section 5, a Monte Carlo simulation algorithm is developed which utilizes both historical data and sampling results to output a distribution of the possible VOR outcomes at the end of a training exercise. An Integer Linear Programming (ILP) model is presented in Section 6 that optimizes resource allocation and vehicle readiness for subsequent exercises. The optimization model compiles the projected VOR rates and MTTR to give an assessment of the number of technician hours that will be required. The Microsoft Excel implementation of the four models is described in Section 7 and subsequently applied in Section 8 on data from Maple Guardian Exercise 2007 (MG0701). The final section of this report highlights the conclusions and recommendations. The reader solely interested in the application of the models can skip the technical parts of sections 3 through 6 without loss of flow.
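Ahead of the detailed algorithm of Section 5, the idea behind the Monte Carlo projection can be shown with a deliberately simplified sketch. It assumes a constant failure rate per kilometre (as computed in Section 2), a fixed daily mileage, and that a failed vehicle remains VOR for the rest of the exercise; all numeric inputs in the example are illustrative.

```python
import math
import random

def project_vor(n_ok, km_per_day, days_left, fail_per_km, trials=10_000, seed=1):
    """Monte Carlo sketch of the end-of-exercise VOR count. Each operational
    vehicle independently survives the remaining mileage with probability
    exp(-rate * km), i.e. a constant-failure-rate (exponential) model."""
    rng = random.Random(seed)
    p_fail = 1.0 - math.exp(-fail_per_km * km_per_day * days_left)
    counts = sorted(
        sum(1 for _ in range(n_ok) if rng.random() < p_fail)
        for _ in range(trials)
    )
    mean = sum(counts) / trials
    return mean, counts[int(0.05 * trials)], counts[int(0.95 * trials)]

# Example: 100 operational vehicles, 30 km/day, 10 days remaining,
# failure rate 1.29E-03 failures/km
mean, p5, p95 = project_vor(100, 30, 10, 1.29e-3)
```

The output is a distribution rather than a point estimate, which is what lets the MWC attach confidence to an end-of-exercise VOR projection; the report's model additionally draws failure rates and mileage from fitted distributions.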

2 Data

2.1 Failure Rates and Maintenance Time

Data for VOR rates and maintenance of vehicles at CFB Wainwright is available through work orders (WO) and the Fleet Management System (FMS) [5]. The WOs for calendar years 2003-2006 captured all maintenance performed on vehicles at CFB Wainwright. Each WO details the vehicle, type of maintenance (corrective or preventative), labour hours required, parts cost, and NSN (NATO Stock Number) name, among other information. The number of times vehicles were VOR in 2003-2006 and the number of hours spent on maintenance actions were computed from this data. FMS captures the kilometer (km) usage data of vehicles. From this data the monthly usage of each vehicle at CFB Wainwright from 1 January 2003 to 31 December 2006 was extracted. Tables 1 and 2 present the compiled data. In the first table, for each vehicle type, the number of WOs and corresponding labour time statistics are presented (total, median, maximum (Max) and minimum (Min) hours (Hrs)). In the second table, usage data from FMS is shown and is coupled with WO occurrences to calculate mean kilometers between failures (MKBFs), as each corrective maintenance WO corresponds to a vehicle failure.

Table 1: Maintenance Results from Work Order Data (2003-2006)

Vehicle      Total Hrs  Occurrences  Median Hrs  Max Hrs  Min Hrs
BISON           1393.3           86       19.75    106.0      1.0
LAV COYOTE      5472.9          133       36.50    158.0      0.0
LAV III APC    14093.5          469       24.25    151.0      0.0
LEOPARD         8139.6           70       81.75    429.0      0.0
TLAV            1869.1           48       21.00    223.5      3.5
LUVW           17916.8          323        8.75    325.5      0.0
MLVW           28461.0         1074       15.50    328.0      0.5
LSVW           13685.5          762       11.50    150.0      0.0
HLVW           14335.5          409       18.00    328.5      0.0

Table 2: Failure Results from FMS and Work Order Data (2003-2006)

Vehicle          Kms  Occurrences  Failures/Km  MKBF (km)
BISON          78942           86     1.09E-03        918
LAV COYOTE    107218          133     1.24E-03        806
LAV III APC   341789          469     1.37E-03        729
LEOPARD         5358           70     1.31E-02         77
TLAV           18877           48     2.54E-03        393
LUVW         1245771          323     2.59E-04       3857
MLVW          831192         1074     1.29E-03        774
LSVW          904974          762     8.42E-04       1188
HLVW          737463          409     5.55E-04       1803
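The MKBF column in Table 2 is a simple ratio of recorded kilometers to corrective-maintenance work orders. A minimal sketch (the function name is ours; the figures are taken from Table 2 above):

```python
# Sketch: reproduce the MKBF column of Table 2 from kms driven and
# corrective work-order (failure) counts, 2003-2006.
usage = {  # vehicle: (kms driven, corrective WOs)
    "BISON":   (78942, 86),
    "LEOPARD": (5358, 70),
    "LUVW":    (1245771, 323),
}

def mkbf(kms: float, failures: int) -> int:
    """Mean kilometers between failures, rounded to the nearest km."""
    return round(kms / failures)

for vehicle, (kms, wos) in usage.items():
    print(vehicle, mkbf(kms, wos))
```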

The accuracy of both data sources can be questioned, as the raw data is manually inputted by users. The kms traveled are recorded on a daily basis by the driver assigned to the vehicle and on a monthly basis by the unit/section. The WO data is inputted by the maintainer opening/closing the WO. There is no warning or editing system in place to detect or correct faulty input [6], so input mistakes are possible. For example, it is highly unlikely that a single MLVW vehicle (CFR 58314) was driven 520,340 kms in twelve months (1 January 2006 to 31 December 2006) as indicated in FMS. Work order WO-729-0018420, opened on 13 September 2004 and closed on 19 April 2005, also stands out, as it lists 429 labour hours (over 60 seven-hour person days). To limit the effect of erroneous outliers, we take the median WO labour hours to be the MTTR. We compare the calculated vehicle MKBFs to MKBF values used in recent studies:

– In a study estimating the reliability of the MLVW, Desmier [7] examined failure and usage data from 1999-2003. The resulting yearly MKBFs were 3585, 1743, 1288, 978, and 735 kms respectively, indicating a downward trend in the reliability of the MLVW. The MKBF of the MLVW at CFB Wainwright for 2003-2006 is calculated as 774 kms.

– In a study simulating first-line replenishment using Medium Support Vehicle System (MSVS) and Heavy Logistic Vehicle Wheeled (HLVW) vehicles [8], an MKBF of 2844 kms was input by the Directorate of Land Requirements [9]. The MKBF of the HLVW at CFB Wainwright for 2003-2006 is calculated as 1803 kms.

The two calculated MKBFs that stand out are those for the Leopard (77 kms) and the LUVW (3857 kms). The low Leopard MKBF may be due to failures being related more to usage hours and rounds fired (turret use), as the tank may sit stationary for long periods during training exercises; currently only kms are recorded as usage data. The LUVW is a new vehicle that entered service in 2004, so it is expected to have better reliability due to engineering advancements.

It should be noted that before the creation of CMTC in 2002, the vehicles filled various roles within the CF and hence have different historical usage profiles. Consider the LAV III with Canadian Forces Registration (CFR) no. 30220, with an odometer reading of 150,699 kms and 26,626 hours of use, in comparison to the LAV III with CFR no. 40278 with only 5,157 kms and 633 hours of use [1]. To further complicate matters, some vehicles are unused for months before being put into demanding service. At the start of Maple Guardian Exercise 2006 (MG0603) it was determined that 40% of the A fleet vehicles were VOR within days of the handover to the visiting Task Force (TF).

It is also necessary to understand the evolution of the failure rate. Reliability theory depicts the failure rate of mechanical systems over time as a “Bathtub” curve (Figure 4) consisting of three periods: Early Failure Period where the system fails early in its life; Intrinsic Failure Period where a relatively constant failure rate is observed; and Wear-out Failure Period where the system is nearing the end of its service life and the failure rate is increasing. (Figure 4 graphs an example failure rate h(t) as a function of time t.)

It has been shown in [7] that the MLVW is in its wear-out failure period, as its failure rate is getting progressively worse. It is not in the scope of this study to determine the failure rate over time (bathtub curve) of the vehicles at CFB Wainwright. It is difficult to accurately determine the current failure rate of the vehicles, but the extracted historical data specific to CFB Wainwright is assumed to be a good approximation of the vehicles' MKBFs. These MKBFs will be fitted with appropriate probability distributions before being used (Section 5). As the bathtub curve shows, the failure rates will change over time, and periodic data updates are required in any model that utilizes them.

Figure 4: Bathtub Curve

2.2 Top Component Failures

One of the main concerns of the Maintenance Workshop Commander (MWC) at CFB/ASU Wainwright is the use of resources (maintainers) to perform vehicle sampling [10]. Sampling is done via visual inspection. Maintainers used for the sampling procedure are removed from their primary task of restoring VOR vehicles to an operational state. It is desirable to minimize the number of maintainer hours required for the sampling process. We tackle this problem in two ways. In the next section we use sampling theory to determine the minimal number of vehicles that need to be sampled to gain the desired confidence in the collected data. A complementary way to reduce the amount of maintainer hours spent on sampling is to concentrate on the components, per vehicle type, that are most likely to need servicing based on historical data. The idea is to streamline sampling efforts by inspecting the specific sub-systems of given vehicle types that account for the largest percentage of failures. For each vehicle type, we processed the 2003-2006 work orders to extract the top component failures. A subset of the WOs have an “NSN NAME” field in which the components repaired/replaced are listed. Table 3 presents the top twenty components serviced for each vehicle type, sorted by the percentage of WOs in which they appear; for each component we list the NSN Name, the number of WOs it appears in, and the corresponding percentage. (Unfortunately the WOs do not record the number of labour hours or the cost per NSN Name.)

Based on this data, we observe that for the Bison and Coyote vehicles, the most likely items to be serviced are seats & belts, steering, brakes, hoses, and light & lamp components. The list of most likely LAV III components comprises the same items with the addition of the battery. For the Leopard tanks, it would be useful to inspect the vehicle's gaskets, as they appear in 81% of the WOs, in addition to filter elements, hoses, and springs. The WO data available for the LUVW indicate that the wheels & tires and circuit breaker/card merit inspection time. Springs, brackets, and the mount assembly connector are the top components likely to fail in the TLAV. The aging MLVW fleet requires close inspection of its wheels & tires, circuit breaker/card, and battery; the MLVW also sustains significant wear and damage to its windows and mirrors. The LSVW and HLVW should be checked for servicing requirements relating to filter elements and seals, in addition to lights & lamps, brakes, and the fuel system on the LSVW, and windows & mirrors, gaskets, and wheels & tires on the HLVW.

The following table of top component failures is specific to CFB Wainwright and may not reflect the data of other centres or CF deployments.

Table 3: Top Component Failures per Vehicle (from Work Order Data 2003-2006)

Bison (52 WO):
SEATS, SAFETY BELTS  19  36.5%
STEERING  12  23.1%
BRAKES  10  19.2%
HOSES  9  17.3%
GASKETS  7  13.5%
LIGHTS & LAMPS  7  13.5%
SUSPENSION & STRUTS  7  13.5%
GUN / WEAPON MOUNT  6  11.5%
SWITCHES  6  11.5%
CIRCUIT BREAKER / CARD  5  9.6%
FILTER ELEMENTS  5  9.6%
VALVES  5  9.6%
WINDOWS, GLASS, MIRRORS  5  9.6%
BATTERY  4  7.7%
INSUL.BLANKET,THERMAL  4  7.7%
PLUG,PIPE  4  7.7%
SCREW,CAP,HEXAGON HEAD  4  7.7%
SHOCK ABSORBER  4  7.7%
SPRINGS  4  7.7%
BUSHINGS  3  5.8%

Coyote (504 WO):
SEATS, SAFETY BELTS  33  41.8%
BRAKES  31  39.2%
HOSES  24  30.4%
LIGHTS & LAMPS  19  24.1%
SEALS  18  22.8%
STEERING  14  17.7%
FILTER ELEMENTS  13  16.5%
SPRINGS  13  16.5%
TRANSMISSION & DRIVE  12  15.2%
CABLE ASSEMBLY,SPECIAL PURPOSE,ELECTRICAL  11  13.9%
GASKETS  11  13.9%
SHOCK ABSORBER  11  13.9%
BATTERY  10  12.7%
WINDSHIELD WIPER WASHER  9  11.4%
VALVES  9  11.4%
BEARING  6  7.6%
FIRE EXTINGUISHER / WARNING  6  7.6%
PERISCOPE,ARMORED VEHICLE  6  7.6%
WINDOWS, GLASS, MIRRORS  6  7.6%
BUSHINGS  5  6.3%

LAV III (298 WO):
BRAKES  91  30.5%
HOSES  72  24.2%
STEERING  66  22.1%
BATTERY  47  15.8%
LIGHTS & LAMPS  42  14.1%
FIRE EXTINGUISHER / WARNING  41  13.8%
SEATS (incl. SAFETY BELTS)  41  13.8%
CABLE ASSEMBLY,SPECIAL PURPOSE,ELECTRICAL  39  13.1%
SEALS  35  11.7%
ROTOCHAMBER  31  10.4%
VALVES  31  10.4%
BEARING  28  9.4%
SPRINGS  24  8.1%
WHEELS AND TIRES  21  7.0%
GASKETS  21  7.0%
O-RING  20  6.7%
VENTILATOR,AIR CIRCULATING  20  6.7%
FILTER ELEMENTS  19  6.4%
SWITCHES  19  6.4%
CARTRIDGE,DEHYDRATOR  18  6.0%

Leopard (42 WO):
GASKETS  34  81.0%
FILTER ELEMENTS  23  54.8%
HOSES  21  50.0%
SPRINGS  17  40.5%
BEARING  13  31.0%
O-RING  13  31.0%
TRACK  13  31.0%
WHEELS AND TIRES  12  28.6%
COVER,PROTECTIVE,RUBBERIZED  12  28.6%
FIRE EXTINGUISHER / WARNING  11  26.2%
BRAKES  11  26.2%
EXHAUST SYSTEM  11  26.2%
SEALS  11  26.2%
BATTERY  10  23.8%
CABLE ASSEMBLY,SPECIAL PURPOSE,ELECTRICAL  10  23.8%
WASHER,LOCK  9  21.4%
ENGINE (GENERAL)  8  19.0%
CENTER GUIDE  7  16.7%
NUT,PLAIN,HEXAGON  7  16.7%
SCREW,CAP,HEXAGON HEAD  7  16.7%

LUVW (19 WO):
WHEELS AND TIRES  8  42.1%
CIRCUIT BREAKER / CARD  3  15.8%
WINDOWS, GLASS, MIRRORS  3  15.8%
BATTERY  2  10.5%
CABLE ASSEMBLY,SPECIAL PURPOSE,ELECTRICAL  1  5.3%
CONTROL ASSEMBLY,PUSH-PULL  1  5.3%
TRANSMITTER  1  5.3%
SWITCHES  1  5.3%

TLAV (25 WO):
SPRINGS  7  28.0%
BRACKET,VEHICULAR COMPONENTS  6  24.0%
CONNECTOR MOUNT ASSEMBLY  5  20.0%
FILTER ELEMENTS  5  20.0%
WIRING HARNESS  5  20.0%
CLAMP,LOOP  4  16.0%
ELBOW,TUBE  4  16.0%
WINDSHIELD WIPER, WASHER  3  12.0%
SEATS, SAFETY BELTS  3  12.0%
BLOCK,VISION,FRAMEL  3  12.0%
BOLT,ADJUSTO-FIT  3  12.0%
COUPLING,CLAMP,GROOVED  3  12.0%
GASKETS  3  12.0%
O-RING  3  12.0%
HOSES  3  12.0%
BATTERY  2  8.0%
BOLT,ELEVATION MECH  2  8.0%
PERISCOPE,ARMORED VEHICLE  2  8.0%
PUMP,ROTARY  2  8.0%
RETAINER,PACKING  2  8.0%

MLVW (688 WO):
WHEELS AND TIRES  343  49.9%
CIRCUIT BREAKER / CARD  266  38.7%
WINDOWS, GLASS, MIRRORS  223  32.4%
BATTERY  165  24.0%
CABLE ASSEMBLY,SPECIAL PURPOSE,ELECTRICAL  152  22.1%
CONTROL ASSEMBLY,PUSH-PULL  140  20.3%
TRANSMITTER  126  18.3%
SWITCHES  96  14.0%
LIGHTS & LAMPS  92  13.4%
BEARING  90  13.1%
WINDSHIELD WIPER, WASHER  87  12.6%
SWITCHES  79  11.5%
WINDOWS, GLASS, MIRRORS  69  10.0%
STEERING  62  9.0%
BATTERY  48  7.0%
WASHER,KEY  48  7.0%
CONTROL ASSEMBLY,PUSH-PULL  46  6.7%
INDICATORS  44  6.4%
DECAL  36  5.2%
DOORS  33  4.8%

LSVW (435 WO):
FILTER ELEMENTS  112  25.7%
SEALS  106  24.4%
LIGHTS & LAMPS  92  21.1%
BRAKES  79  18.2%
FUEL SYSTEM (PUMP, INJECTOR, LINE, TANK)  79  18.2%
SEATS (incl. SAFETY BELTS)  74  17.0%
STRAINER ELEMENT,SEDIMENT  62  14.3%
BATTERY  61  14.0%
SWITCHES  50  11.5%
GASKETS  47  10.8%
WINDOWS, GLASS, MIRRORS  44  10.1%
BEARING  43  9.9%
WHEELS AND TIRES  39  9.0%
TRANSMISSION & DRIVE  38  8.7%
TRANSMITTER  38  8.7%
EXHAUST SYSTEM  37  8.5%
SPRINGS  37  8.5%
HOSES  33  7.6%
MOUNT,RESILIENT,GENERAL PURPOSE  29  6.7%
PACKING,PREFORMED  28  6.4%

HLVW (203 WO):
FILTER ELEMENTS  105  51.7%
SEALS  57  28.1%
WINDOWS, GLASS, MIRRORS  38  18.7%
GASKETS  34  16.7%
WHEELS AND TIRES  32  15.8%
EXHAUST SYSTEM  29  14.3%
HOSES  29  14.3%
LIGHTS & LAMPS  28  13.8%
SWITCHES  28  13.8%
SEATS, SAFETY BELTS  27  13.3%
O-RING  25  12.3%
BRAKES  23  11.3%
GUARD,SPLASH,VEHICULAR  23  11.3%
TRANSMISSION & DRIVE  23  11.3%
DESICC.CONT.,DEHUMID.  22  10.8%
WIRE ROPE ASSEMBLY  20  9.9%
SPRINGS  17  8.4%
NUT,SELF-LOCKING,HEX  16  7.9%
WINDSHIELD WIPER WASHER  15  7.4%
CYL.ASSMBLY,ACTUATING,LINEAR  14  6.9%

3 Sample Size Determination

At given points during a TF exercise, the MWC of CFB Wainwright sends out a team to inspect a subset of the vehicles with the aim of estimating the overall VOR rate of the fleet. How many vehicles should be inspected in order to get an appropriate degree of confidence in the result? This problem is further compounded by the limited resources and time available for the inspection. A technically capable maintenance crew is required to properly sample the fleet, removing this skill from the labour pool available for other vehicle maintenance duties. Additionally, the set of vehicles available for sampling on a particular day depends on the TF vehicles in use that day. Optimizing the sample size would minimize the number of ASU maintainer hours required for the procedure.

Determination of a good sample size is non-trivial, but methods for sample selection and size estimation have been developed in sampling theory that provide, at the lowest possible cost, estimates that meet the desired precision. In order to use these methods, a very important assumption must be made, that of random sampling: the sampling procedure where the selection of the sample is governed by the laws of chance. The importance of this assumption cannot be overstated; the calculations in this section hinge on it. Ensuring randomness is difficult to implement in practice. In particular, the maintenance team in charge of vehicle sampling may only have access to parked vehicles that are not used in the field for exercise on that particular day. Furthermore, parked vehicles may be parked solely because they are VOR. In [2] it was noted that on some occasions vehicles are sampled from a single company. This is hardly random, as different companies use the vehicles differently. As an example, consider the spectrum of usage of command-post LAVs at headquarters, which tend to sit in one place for the whole exercise, in comparison to the infantry LAV III troop carrier. Ideally, the vehicle sample sets should be selected at random, perhaps based on CF registration (CFR) numbers, before the inspection team leaves for the forward operating base. Assuming simple random sampling greatly simplifies the sampling theory. It was not within the scope of this study to model the potential sampling biases.

In what follows, we explain the application of sampling theory that leads to the determination of the optimal sample size. The interested reader is referred to Cochran [11] for further explanation of the theory presented in this section. The mathematical results presented here are well-established in stratified sampling theory and should not be attributed to the author.

3.1 Key Criteria and Probability Distribution Models

Two key criteria must be specified to determine an appropriate sample size: the precision and confidence levels. The level of precision is the range, specified in percentage points, within which the true value of the population is estimated to lie. The level of confidence, also specified in percentage points, indicates the probability that the results from the sampling do indeed fall within the precision level specified. Typically, the confidence level is set at 95% and the precision level is set to 5-10%. So, for example, the MWC at CFB Wainwright might want to be 95% confident that X% ± 5% of the vehicles are VOR, where X% is the percentage of VOR vehicles in the sample.¹ A third criterion, the degree of variability of the population, refers to the distribution of VOR vehicles in the entire population. If we know that, for example, 20% of the vehicles in the population are not VOR, then we can use this information to decrease the sample size required to obtain the desired confidence and precision levels. The degree of variability is symmetric, and a value of 50% leads to the most conservative (largest) sample size.

1. Upon repeated sampling, 95% of the confidence intervals (X% ± 5%) would contain the true (unknown) value of the VOR proportion.

For each fleet, A and B, we wish to estimate the total number, or proportion, of vehicles that are VOR. Every vehicle falls into one of two distinct classes: VOR or not. The notation we use is as follows:

N        Number of vehicles in fleet (A or B)
n        Number of vehicles in sample
A        Number of VOR vehicles in population
a        Number of VOR vehicles in sample
P = A/N  Proportion of VOR units in population
p = a/n  Sample estimate of P
q = 1 − p

In this case, the appropriate distribution for a is the hypergeometric, a discrete probability distribution that describes the number of successes in a sequence of draws from a finite population without replacement using simple random sampling. This describes the case at CFB Wainwright, as the vehicle populations are finite and vehicles that are inspected are not re-inspected during the same sampling procedure. The probability mass function of the hypergeometric distribution is given by

    f(a; N, A, n) = C(A, a) C(N − A, n − a) / C(N, n),    (1)

where C(x, y) denotes the binomial coefficient “x choose y”. Function (1) gives the probability of finding a VOR vehicles in the sample of size n given that there are a total of A VOR vehicles in the population of N. In function (1), a can range from 0 to n.
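Equation (1) is easy to evaluate with binomial coefficients; a minimal sketch (the function name is ours):

```python
from math import comb

def hypergeom_pmf(a: int, N: int, A: int, n: int) -> float:
    """Equation (1): probability of observing a VOR vehicles in a sample
    of n drawn without replacement from N vehicles of which A are VOR."""
    if a < 0 or a > n or a > A or n - a > N - A:
        return 0.0
    return comb(A, a) * comb(N - A, n - a) / comb(N, n)

# Sanity check: the pmf sums to 1 over all possible outcomes a = 0..n.
total = sum(hypergeom_pmf(a, 100, 30, 10) for a in range(11))
```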

In the sample of size n where a vehicles are VOR, in order to determine the upper confidence limit on the total number of VOR vehicles, A, we need to compute the smallest integral value ÂU such that the probability of getting a or fewer VOR in the sample is some small quantity αU denoting the upper-confidence error. ÂU has to satisfy the inequality

    ∑_{j=0}^{a} f(j; N, ÂU, n) ≤ αU.    (2)

Similarly, the lower confidence limit ÂL is the largest integral value such that

    ∑_{j=a}^{n} f(j; N, ÂL, n) ≤ αL    (3)

for some small quantity αL. These inequalities are derived in [11].
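Inequalities (2) and (3) can be solved by direct search over the integer candidates for ÂU and ÂL, which illustrates why the exact approach is more computationally intensive than a normal approximation. A sketch, with function names of our choosing:

```python
from math import comb

def f(j, N, A, n):
    """Hypergeometric pmf of equation (1)."""
    if j < 0 or j > n or j > A or n - j > N - A:
        return 0.0
    return comb(A, j) * comb(N - A, n - j) / comb(N, n)

def upper_limit(N, n, a, alpha_u):
    """Smallest integral A_hat_U satisfying inequality (2)."""
    for A_hat in range(a, N - (n - a) + 1):
        if sum(f(j, N, A_hat, n) for j in range(a + 1)) <= alpha_u:
            return A_hat
    return N - (n - a)

def lower_limit(N, n, a, alpha_l):
    """Largest integral A_hat_L satisfying inequality (3)."""
    sat = [A_hat for A_hat in range(a, N - (n - a) + 1)
           if sum(f(j, N, A_hat, n) for j in range(a, n + 1)) <= alpha_l]
    return max(sat) if sat else a
```

For example, with N = 100, n = 30, a = 10 and αU = αL = 0.025, the two limits bracket the point estimate of roughly 33 VOR vehicles.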

We need to solve inequalities (2) and (3) for n, ÂU, and ÂL given specified error margins αU and αL. This can be computationally intensive, and so the alternative, often used in practice, is to assume that the sample estimate p is approximately normally distributed. While this is an accepted approach, see Feller [12], Hájek [13], Erdös and Rényi [14], and Madow [15], there is no safe general rule as to how large the sample size must be to use a normal approximation for computing confidence intervals. The error of the approximation depends on all the quantities n, p, N, αU, and αL. However, for the purpose of computing a sample size, using the normal approximation is acceptable. With this assumption we can approximately compute the desired variance V in the estimate of the proportion of VOR for the whole population using the equation V = d²/t², where d is the desired precision level and t is the abscissa of the normal curve that cuts off an area, as per the selected confidence level, at the tails [11].

We will revert to the hypergeometric distribution in Section 4 when we infer the results from the sample to the population.

3.2 Stratified Sampling Model

In stratified sampling, the population of N units is divided into subpopulations, called strata, of N1, N2, ..., NL units respectively. For our problem, the different vehicle types naturally form strata: the MLVW, LSVW, LUVW, and HLVW form strata in the B fleet population; the BISON, TLAV, LEOPARD, COYOTE, and LAV III form strata in the A fleet. A simple random sample is drawn from each stratum, of sizes n1, n2, ..., nL respectively, where L is the number of strata. Stratified sampling is used when: (1) estimates of population statistics of known precision are wanted for each stratum (as well as for the population as a whole): VOR rates per vehicle type could indicate relative usage intensities; (2) strata have different characteristics or attributes: different vehicles have different reliabilities and different usage characteristics; or (3) stratification may produce a gain in precision if the strata have different degrees of variability: particular vehicle types are prone to break down more often due to the nature of the exercise, etc. In [2] it was found that there is a significant difference between the historical VOR rates of armoured (A fleet) and unarmoured (B fleet) vehicles. That report suggested that stratified sampling was not required at the vehicle model level, but only at the fleet level. However, the varied failure rates calculated in Section 2 provide justification to apply stratified sampling at the vehicle model level. Furthermore, the inspection time per vehicle type can vary, especially in light of the top component failures compiled in Section 2. Stratified sampling takes advantage of these differences.

The suffix h denotes the stratum and i the unit within the stratum. Stratification theory deals with determining the best choices of the sample sizes nh to obtain maximum precision. The equations in this section are derived in [11]. The notation is as follows:

N           Number of vehicles in fleet (A or B)
Nh          Number of vehicles of type h
n           Number of vehicles sampled
nh          Number of vehicles to sample in stratum h
Ah          Number of VOR vehicles of type h in population
ah          Number of VOR vehicles of type h in sample
Ph = Ah/Nh  Proportion of VOR units of type h in population
ph = ah/nh  Sample estimate of Ph
Wh = Nh/N   Stratum weight

3.2.1 Stratified Sampling for Proportions

When nh/n = Nh/N for all h, the sampling fraction is the same in all strata. This is called proportional allocation of the nh, giving a self-weighting sample. Define yhi, for i = 1, ..., Nh, as 1 if vehicle i is VOR, and 0 otherwise. For stratified random sampling, the true variance of the yhi for a stratum is defined as

    Sh² = [Nh/(Nh − 1)] PhQh,    (4)

where Qh = 1 − Ph. The stratum sample variance is

    sh² = [nh/(nh − 1)] phqh,    (5)

where qh = 1 − ph. ph is an unbiased estimate of Ph, and sh² is an unbiased estimate of Sh².

The estimate, pst (st for stratified), of the proportion of VOR vehicles in the population is calculated as

    pst = ∑_{h=1}^{L} Nh ph / N,    (6)

and the variance of the estimate pst is

    V(pst) = (1/N²) ∑_{h=1}^{L} [Nh²(Nh − nh)/(Nh − 1)] (PhQh/nh),    (7)

where substituting phqh/(nh − 1) for PhQh/nh gives a sample estimate of V(pst). For academic purposes, a full derivation of V(pst) is reproduced in Annex A.1.

The sample size n using stratified sampling for proportions is calculated by

    n = n̄ / (1 + n̄/N),    (8)

    n̄ = [∑_{h=1}^{L} (Nh/N) phqh] / V,    (9)

where V is the desired variance in the estimate of the proportion P for the whole population. Under the normality assumption a rough estimate for V is d²/t², where d is the desired precision level and t is the abscissa of the normal curve that cuts off an area corresponding to the risk of error at the tails [11]. The number of vehicles, nh, to sample per stratum h is calculated by

    nh = n Nh/N.    (10)
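Equations (8)-(10) reduce to a few lines of arithmetic. The sketch below (the function name is ours) uses the conservative assumption ph = 0.5 in each stratum; with a single stratum of N = 100, 95% confidence (t = 1.96) and 10% precision it returns a sample size of 49 vehicles:

```python
import math

def stratified_sample_size(N_h, p_h, d=0.10, t=1.96):
    """Equations (8)-(10): total sample size and per-stratum allocation
    under proportional allocation. N_h: stratum sizes; p_h: anticipated
    VOR proportion per stratum; d: precision; t: normal abscissa."""
    N = sum(N_h)
    V = d * d / (t * t)  # desired variance, V = d^2 / t^2
    n_bar = sum((Nh / N) * ph * (1 - ph) for Nh, ph in zip(N_h, p_h)) / V
    n = math.ceil(n_bar / (1 + n_bar / N))           # equation (8)
    return n, [math.ceil(n * Nh / N) for Nh in N_h]  # equation (10)

n, allocation = stratified_sample_size([100], [0.5])
```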

3.2.2 Stratified Random Sampling: Optimum Allocation

The main resource concern is the number of maintainer hours spent on the sampling procedure. Stratified random sampling with optimal allocation theory allows us to either minimize the number of hours spent sampling while achieving the desired confidence and precision levels, or to maximize the sampling precision for an allocated number of hours. This section explains how to select the sample sizes nh in the respective strata to either 1) minimize the precision level (error) for a specified cost of taking the sample; or 2) minimize the cost of sampling for a desired precision level. We will assume a simple linear cost function,

    cost = C = ∑_{h=1}^{L} ch nh,    (11)

where ch is the cost per unit, which can vary from stratum to stratum. In our case ch is the number of minutes required to inspect a vehicle of type h. The cost can vary; for example, the number of minutes required to inspect a Leopard tank may be more than the number of minutes required to inspect a LUVW. In equation (11) fixed costs are omitted; including them does not affect the calculations.

Minimizing the precision level, d, is the same as minimizing the variance, as d² = t²σ² ≈ t²V(pst) with V(pst) defined by (7). Choosing nh to minimize V(pst) for fixed C, or C for fixed V(pst), are both equivalent to choosing nh to minimize the product V(pst) · C. In this case the sample size nh in a stratum is derived as

    nh = n (WhSh/√ch) / ∑_{l=1}^{L} (WlSl/√cl).    (12)

This result indicates that it is better to sample more from a particular stratum (vehicle type) if the stratum is larger, is more variable, or if it takes less time per vehicle inspection. To compute nh in equation (12) we need to know the value of n. The solution depends on whether the sample is chosen to meet a specified cost or a specified variance. When the cost is fixed, we solve for n by substituting the optimum values of nh into function (11), giving

    n = C ∑_{h=1}^{L} (WhSh/√ch) / ∑_{h=1}^{L} (WhSh√ch).    (13)

When the precision level is fixed, we substitute the optimum nh into the V(pst) formula, (7), to find

    n = [∑_{h=1}^{L} (WhSh√ch)] [∑_{h=1}^{L} (WhSh/√ch)] / [V(pst) + (1/N) ∑_{h=1}^{L} WhSh²],    (14)

where n is the total number of vehicles to be inspected for a particular fleet, and nh is the number of vehicles of type h (strata of the fleet) to be inspected (if nh is fractional we round down and add one). The formulas are applied, using sh² as an unbiased estimate of Sh², to both the A and B fleets (armoured and unarmoured vehicle fleets respectively) to compute the total sample size. The full derivations of the equations in this section are available in [11] and reproduced in detail in Annex A.2.
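For a fixed inspection budget, equations (12) and (13) are likewise a few lines. A sketch (names ours) with illustrative, assumed stratum weights, variabilities, and per-vehicle inspection times:

```python
import math

def optimum_allocation(W, S, c, C):
    """Equations (12) and (13): optimum allocation for a fixed total
    cost C. W: stratum weights Nh/N; S: stratum standard deviations Sh;
    c: inspection cost (e.g. minutes) per vehicle of each type."""
    over = [w * s / math.sqrt(ch) for w, s, ch in zip(W, S, c)]
    under = [w * s * math.sqrt(ch) for w, s, ch in zip(W, S, c)]
    n = C * sum(over) / sum(under)               # equation (13)
    return n, [n * o / sum(over) for o in over]  # equation (12)

# Two equally variable strata; the second takes 4x longer to inspect,
# so it receives proportionally fewer inspections.
n, n_h = optimum_allocation(W=[0.6, 0.4], S=[0.5, 0.5], c=[1.0, 4.0], C=20.0)
```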

4 VOR Inference

This section explains a model that uses Bayesian inference to generalize the results of the sampling to the entire population. Sampling is done without replacement (a vehicle is not inspected twice), hence the hypergeometric probability distribution is used. We wish to estimate the total number, or proportion, of VOR vehicles (per vehicle type) based on the sample. We continue using the notation of the previous section:

N  Number of vehicles in training exercise
n  Number of vehicles in sample
A  Number of VOR vehicles in population
a  Number of VOR vehicles in sample

Prior to vehicle sampling, without any prior knowledge about A, the probability that i vehicles are VOR in the population is equal to the probability that j vehicles are VOR for all i, j ∈{0,...,N}. Vehicle sampling provides more information. At the very least, we know that there are at least a VOR vehicles in the population, and not more than N − (n − a) VOR vehicles. We use Bayes’ theorem [16] to compute the probability distribution for the possible values of A (given the sample data) ranging from a to N − (n − a). Using the notation of Vose [17], Bayes’ theorem is stated as:

    P(Ai|B) = P(B|Ai) P(Ai) / ∑_j P(B|Aj) P(Aj),    (15)

where the sum ranges over the possible values of j for which P(Aj) exists. P(Ai|B) is the conditional probability of event Ai occurring given that event B has occurred. In our case, event B is the observation of a VOR vehicles in the sample of size n. Define P(Ai) to be the probability that there are exactly i vehicles VOR in the population. Since P(Ai) = P(Aj) for all i, j ∈ {0, ..., N}, we can simplify (15) to

    P(Ai|B) = P(B|Ai) / ∑_{j=a}^{N−(n−a)} P(B|Aj).    (16)

Using the probability mass function of the hypergeometric distribution, P(B|Ai) is defined as

    P(B|Ai) = C(i, a) C(N − i, n − a) / C(N, n).    (17)

Formula (17) computes the probability of finding a VOR vehicles in the sample of size n given that there are a total of i VOR vehicles in the population of N. Substituting (17) into (16), the C(N, n) terms cancel and we find

    P(Ai|B) = C(i, a) C(N − i, n − a) / ∑_{j=a}^{N−(n−a)} C(j, a) C(N − j, n − a).    (18)

At this point it is useful to visualize the graph of P(Ai|B) for i = a, ..., N − (n − a). The peak in the graph represents the most likely number of VOR vehicles, Aexp, in the population. This is the value of i that maximizes P(Ai|B): P(Aexp|B) ≥ P(Aj|B) for all j = a, ..., N − (n − a). In Figure 5 three example probability distributions of P(Aj|B) are drawn. In all three examples N = 100. The distribution labeled 1 is the graph when a = 1, n = 3; the distribution labeled 2 is the graph when a = 10, n = 30; and the last distribution is the graph when a = 30, n = 90. In all three cases, the most likely number of VOR vehicles is 33; however, the sharpness of the peaks changes in relation to the sample size.²


Figure 5: VOR Inference Examples
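The posterior (18) and its mode are exact binomial-coefficient computations. The sketch below (function names are ours) reproduces the most likely value of 33 for all three examples graphed in Figure 5:

```python
from math import comb

def posterior(N, n, a):
    """Equation (18): P(A_i | B) for i = a, ..., N - (n - a)."""
    support = range(a, N - (n - a) + 1)
    w = {i: comb(i, a) * comb(N - i, n - a) for i in support}
    total = sum(w.values())
    return {i: wi / total for i, wi in w.items()}

def most_likely_vor(N, n, a):
    """The value A_exp that maximizes P(A_i | B)."""
    post = posterior(N, n, a)
    return max(post, key=post.get)

# Three samples with a/n = 1/3 and N = 100, as in Figure 5.
modes = [most_likely_vor(100, n, a) for a, n in [(1, 3), (10, 30), (30, 90)]]
```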

∑_{i=AL}^{AU} P(Ai|B) is the conditional probability of having between AL and AU VOR vehicles given that a VOR vehicles were observed in the sample. Using cumulative probability, we define best- and worst-case bounds on the number of VOR vehicles. Define the best-case number of VOR vehicles to be the number of vehicles ÂL such that with at most α probability (e.g., 5%) there are fewer than ÂL VOR vehicles in the population. For any chosen α we can compute the cumulative probability to find the largest ÂL for which

    ∑_{i=ÂL}^{N−(n−a)} P(Ai|B) ≥ 1 − α.    (19)

Similarly, we define the worst-case number of VOR vehicles to be the number of vehicles ÂU such that with at most β probability, there are more than ÂU VOR vehicles in the population. For any chosen β we can compute the cumulative probability to find the smallest value ÂU for which

    ∑_{i=a}^{ÂU} P(Ai|B) ≥ 1 − β.    (20)

2. The optimal sample size, as calculated in Section 3, for a confidence level of 95% and precision level of 10% would be 49 vehicles.

Computing ÂL and ÂU using (19) and (20), we can say that with probability at most α there are fewer than ÂL VOR vehicles, and with probability at most β there are more than ÂU VOR vehicles in the population.
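The bounds (19) and (20) follow from cumulative sums of the posterior (18); a sketch with function names of our choosing and α = β = 5%:

```python
from math import comb

def posterior(N, n, a):
    """Equation (18): P(A_i | B) for i = a, ..., N - (n - a)."""
    support = range(a, N - (n - a) + 1)
    w = {i: comb(i, a) * comb(N - i, n - a) for i in support}
    total = sum(w.values())
    return {i: wi / total for i, wi in w.items()}

def vor_bounds(N, n, a, alpha=0.05, beta=0.05):
    """Largest A_hat_L satisfying (19) and smallest A_hat_U satisfying (20)."""
    post = posterior(N, n, a)
    support = sorted(post)
    A_L, below = support[0], 0.0
    for i in support:           # (19): mass at or above i is 1 - below
        if 1.0 - below >= 1.0 - alpha:
            A_L = i
        below += post[i]
    cum = 0.0
    for i in support:           # (20): smallest i capturing 1 - beta mass
        cum += post[i]
        if cum >= 1.0 - beta:
            return A_L, i
    return A_L, support[-1]
```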

5 End-of-Exercise VOR Projection

To predict the VOR rate of the fleet at the end of an exercise, we propose a Monte Carlo simulation algorithm. Given daily usage rates, either estimated or observed from sampling, we set up a model that takes into account both historical and TF-exercise failure rates. Using stochastic modeling, we can compute not only the mean number of VOR vehicles to expect, but also the distribution of outcomes expressing the range of possible numbers of VOR vehicles.

5.1 Stochastic Parameters

The probability that a vehicle becomes VOR can be modeled by the exponential failure distribution (see [18] for more on reliability theory): given the mean kilometers between failures (MKBF) of a vehicle, the probability that a vehicle becomes VOR in a training exercise of length d days traveling k km per day (where D = d · k is the total kms traveled) is

    F(D) = 1 − e^(−D/MKBF).    (21)

R(D) = e^(−D/MKBF) is the reliability of the vehicle. For 100 kms of travel, a vehicle with an MKBF of 1,000 kms has a probability of failure of F(100 km) = 0.0952, a 9.52% chance of breakdown. Hence if we had ten of these vehicles, we would expect to find one to be VOR by the end of the exercise. Similarly, if the training exercise was run ten separate times with one vehicle, we would expect to find the vehicle VOR at the end of one of the exercises. Monte Carlo simulation is based on the latter concept. We first fit probability distributions to any input variables with uncertainty and proceed to randomly sample once from each of these distributions to get a representative result. We iterate this process many times until we get statistically relevant results: a distribution of potential outcomes.
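The iteration described above can be sketched in a few lines with Python's standard library: draw a total distance per vehicle, convert it to a failure probability via equation (21), and tally the end-of-exercise VOR count over many repeated virtual exercises. All parameter values below (fleet size, exercise length, MKBF, daily distances) are illustrative assumptions, not figures from this report:

```python
import math
import random

def simulate_vor(n_vehicles=50, days=14, mkbf=1000.0,
                 km_min=5.0, km_mode=20.0, km_max=60.0, trials=2000, seed=1):
    """Monte Carlo distribution of end-of-exercise VOR counts.
    Each trial draws a total distance D per vehicle (a sum of triangular
    daily distances) and fails the vehicle with probability
    F(D) = 1 - e^(-D/MKBF), per equation (21)."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(trials):
        vor = 0
        for _ in range(n_vehicles):
            D = sum(rng.triangular(km_min, km_max, km_mode) for _ in range(days))
            if rng.random() < 1.0 - math.exp(-D / mkbf):
                vor += 1
        outcomes.append(vor)
    return outcomes
```

The returned list approximates the full distribution of possible VOR outcomes, from which means and percentiles can be read off.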

In our model, both the daily distance traveled and vehicle MKBF are stochastic input parameters to which we fit probability distributions. We model the daily distance traveled per vehicle type with a triangular distribution [17],

    f(x; a, b, c) = 2(x − a) / [(b − a)(c − a)]  for a ≤ x ≤ c,
                    2(b − x) / [(b − a)(b − c)]  for c ≤ x ≤ b,    (22)

where a, b, and c represent the minimum, maximum, and most-likely values for the distance traveled. These parameters can either be measured via sampling or estimated by the MWC and TF commanders. The triangular distribution is typically used when only a subjective description of a population is given. The MKBF of each vehicle is computed from historical data (see Section 2). This data, the number of kilometers driven and the number of failures observed, is credible prior knowledge that can be analytically represented by a gamma distribution,

g(x; α, β) = e^{−x/β} x^{α−1} β^{−α} / ∫₀^∞ x^{α−1} e^{−x} dx.   (23)

Bayesian reliability estimation methodology [19] treats the usage and failure parameters as random, not fixed, quantities and uses the historical information to construct a prior distribution model for

DRDC CORA TR 2007–023 19

6 VOR Management

We now turn to the problem of optimally managing VOR vehicles. At CFB Wainwright there can be as little as two weeks between TF exercises during which maintainers repair and service the vehicles in an effort to maximize availability of vehicles to meet the demands of the subsequent TF. What is the best way to allocate maintenance hours in order to meet the next TF demands? This optimization problem was modeled mathematically as an integer linear program (ILP): a system of decision variables directed by a linear objective function, constrained by linear inequalities (See [21] for more background). In this section the mathematical formulation is provided.

The variables of the ILP were chosen to represent the number of vehicles of each type to be repaired. We used the median number of labour hours required per repair for each vehicle type, obtained from 2003-2006 work orders (see Table 1), as the MTTR per VOR vehicle.

Let I be the set of vehicle types,

I = {BISON, LAV III, COYOTE, LEOPARD, TLAV, LUVW, MLVW, HLVW, LSVW},

and let i index an item of I.

From the number of days in between exercises and the number of maintainers available, we compute the number of available labour hours, denoted LBR_HRS. The CFB Wainwright and TF specific input parameters needed are described below.

VOR_i      number of VOR vehicles of type i ∈ I at the end of the exercise
CFB_i      total number of vehicles of type i at CFB Wainwright
TF_i       number of vehicles of type i ∈ I required for the next TF
R_i        number of vehicles of type i ∈ I available for the next TF (on reserve / not VOR)
F_i        number of VOR vehicles of type i ∈ I that need to be fixed to meet the next TF request
A_i        minimum % availability of vehicles of type i ∈ I desired for the next TF
h_i        median labour time to repair a vehicle of type i ∈ I
LBR_HRS    number of labour hours available

The estimated total number of maintenance hours required to repair all VOR vehicles (based on the median labour times to repair) is

∑_{i∈I} VOR_i · h_i.   (24)

The number of vehicles that need to be repaired to meet the demand of the next TF is

∑_{i∈I} F_i = ∑_{i∈I} max{0, TF_i − (CFB_i − VOR_i)}.   (25)

We define x_i for i ∈ I to be integer variables representing the number of fixed vehicles of type i that are required for the next TF. Let y_i for i ∈ I be integer variables representing the total number

of vehicles of type i fixed before the start of the next TF exercise. Clearly x_i ≤ y_i for all i. The following basic constraints are needed:

x_i ≤ F_i   (26)

y_i ≤ VOR_i   (27)

x_i ≤ y_i,   (28)

upper bounding the number of vehicles repaired. The objective is to maximize vehicle availability, the percentage of vehicles that are operationally ready (not VOR) at the base. Define a_i^CFB to be the availability of vehicles of type i ∈ I at the base,

a_i^CFB = 1 − (VOR_i − y_i)/CFB_i.   (29)

Similarly, define a_i^TF to be the availability of vehicles of type i ∈ I for the next TF,

a_i^TF = 1 − (F_i − x_i)/TF_i,   (30)

imposing the constraint

a_i^TF ≥ A_i   (31)

to enforce a set minimum availability per vehicle type if desired. The primary objective is to maximize availability of the vehicles required for the next TF and the secondary objective is to maximize availability for all vehicles. Suitable weights w_1 and w_2 can be found to prioritize the two objectives as desired. We can choose to either maximize the overall availability,

maximize z = w_1 · (1 − ∑_{i∈I} (F_i − x_i) / ∑_{i∈I} TF_i)   (for the TF)   (32)
           + w_2 · (1 − ∑_{i∈I} (VOR_i − y_i) / ∑_{i∈I} CFB_i)   (for CFB)   (33)

or maximize the minimum availability per vehicle type,

u^TF ≤ a_i^TF for i ∈ I,   (34)
u^CFB ≤ a_i^CFB for i ∈ I,   (35)

maximize z = w_1 · u^TF + w_2 · u^CFB.   (36)

The main constraint is that the number of labour hours used does not exceed the number of hours available:

∑_{i∈I} h_i · y_i ≤ LBR_HRS.   (37)

The solution to the formulated ILP determines the optimal number of vehicles to fix of each type.
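The tool solves this ILP with the Excel Solver. As an illustrative stand-in, the sketch below solves a tiny two-type instance of the overall-availability objective (32)–(33) exactly by exhaustive search over the integer variables y_i; all fleet numbers, repair hours, and weights are hypothetical, not taken from the report:

```python
import itertools

# Hypothetical small instance -- all numbers are illustrative only.
types = ["BISON", "MLVW"]
VOR = {"BISON": 3, "MLVW": 4}        # VOR vehicles at end of exercise
CFB = {"BISON": 16, "MLVW": 30}      # total vehicles of each type at the base
TF = {"BISON": 14, "MLVW": 27}       # vehicles required by the next TF
h = {"BISON": 19.75, "MLVW": 15.5}   # median repair hours per vehicle type
LBR_HRS = 100.0                      # labour hours available
w1, w2 = 10.0, 1.0                   # weights: prioritize TF availability

# F_i: VOR vehicles that must be fixed to meet the next TF request (Eq. 25).
F = {i: max(0, TF[i] - (CFB[i] - VOR[i])) for i in types}

best = None
# Enumerate all feasible y_i (total vehicles of type i fixed); exact for
# small instances, standing in for the Excel Solver used by the tool.
for combo in itertools.product(*(range(VOR[i] + 1) for i in types)):
    y = dict(zip(types, combo))
    if sum(h[i] * y[i] for i in types) > LBR_HRS:  # labour constraint (Eq. 37)
        continue
    # Optimal x_i given y_i: fix TF-needed vehicles first (x_i <= y_i, F_i).
    x = {i: min(y[i], F[i]) for i in types}
    # Weighted overall-availability objective (Eqs. 32-33).
    z = (w1 * (1 - sum(F[i] - x[i] for i in types) / sum(TF.values()))
         + w2 * (1 - sum(VOR[i] - y[i] for i in types) / sum(CFB.values())))
    if best is None or z > best[0]:
        best = (z, x, y)

z_opt, x_opt, y_opt = best
```

In this instance the labour budget cannot cover all seven VOR vehicles, so the search fixes the two TF-required vehicles first and then as many others as the remaining hours allow.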

7 Model Implementation

The theoretical sampling, simulation, and optimization models presented in the previous sections have been implemented in Microsoft Excel [22], using the Excel Solver and Visual Basic (VB) [23] macros where necessary. In this section we describe the tool developed, its input requirements, and the use of the four main modules. Screen shots of the Excel model are provided.

7.1 Sample Size Calculator

Figure 7 depicts the Sample Size Calculator. The user selects, via buttons, which stratified sampling model is to be used: proportional allocation, optimal cost allocation, or optimal precision allocation. For all choices, the user must specify the number of vehicles being used and the desired sampling confidence and precision levels. If additional data is available, then optimal allocation should be chosen; in that case the VOR variability and sampling cost for each vehicle must be entered. If optimal precision allocation is selected, the user must also input the maximum total sampling cost (hrs) for both vehicle fleets. Lacking such data, proportional allocation should be selected. All user input cells are lightly shaded in yellow. The suggested number of vehicles to be sampled is automatically calculated and listed below the selected sampling model.
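For illustration, a textbook (Cochran-style [11]) sample-size calculation with proportional allocation can be sketched as follows. Note that the report's own Section 3 derivation differs in detail, so these numbers will not reproduce the calculator's output exactly:

```python
import math

def sample_size(N, z=1.96, precision=0.05, p=0.5):
    """Cochran-style sample size with finite-population correction.

    Illustrative only: z = 1.96 for 95% confidence, p = 0.5 is the
    maximum-variance (no prior knowledge) assumption used in the report.
    """
    n0 = z**2 * p * (1 - p) / precision**2
    return math.ceil(n0 / (1 + (n0 - 1) / N))

def proportional_allocation(strata, n):
    """Proportional allocation: n_h = n * N_h / N, rounded up per stratum."""
    N = sum(strata.values())
    return {stratum: math.ceil(n * Nh / N) for stratum, Nh in strata.items()}

# Hypothetical A fleet strata (counts taken from the MG0701 example).
a_fleet = {"BISON": 16, "LAV COYOTE": 22, "LAV III APC": 74, "TLAV": 4}
n = sample_size(sum(a_fleet.values()))
allocation = proportional_allocation(a_fleet, n)
```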

Figure 7: Sample Size Calculator

7.2 VOR Inference Calculator

To initiate the VOR Inference Calculator, the user must input the actual number of vehicles sampled and the number of VOR observed. Sections B and C of Figure 8 show the user input areas of the VOR Inference Calculator. The inference model is implemented as a VB macro which is initiated by pressing the button labeled “COMPUTE” in section D where the inference results are subsequently displayed. The user can specify a confidence interval to get worst- and best-case VOR rates by selecting allowable error percentages as desired (see Figure 9).
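Since the inference model is based on the hypergeometric probability distribution, its core step can be sketched as a posterior over a stratum's VOR count given the sample, assuming a uniform prior (an illustrative simplification of the tool's VB implementation):

```python
from math import comb

def vor_posterior(N, n, a):
    """Posterior P(#VOR in stratum = A | sample) for a stratum of N vehicles,
    of which n were sampled and a were found VOR.

    Uses a hypergeometric likelihood with a uniform prior on A = 0..N;
    a sketch of the inference model, not the tool's exact VB code.
    """
    likelihood = {A: comb(A, a) * comb(N - A, n - a)
                  for A in range(N + 1)
                  if A >= a and N - A >= n - a}   # feasible VOR counts only
    total = sum(likelihood.values())
    return {A: v / total for A, v in likelihood.items()}

# Example: 16 vehicles, 2 sampled, 0 found VOR -> low VOR counts most likely.
posterior = vor_posterior(16, 2, 0)
```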

Figure 8: VOR Inference Calculator

Figure 9: VOR Inference: Worst-Case and Best-Case Confidence Interval Errors

7.3 VOR Projection Calculator

The VOR Projection Calculator requires further user input. The Excel sheet is split up to capture additional vehicle sampling data and exercise/maintenance data.

Vehicle Sampling Data: For each vehicle type, the user must provide the total mileage observed (cumulative total) and the estimated minimum, maximum, and most-likely daily usage rate (kms/day) for the given exercise. See Figure 10 for a snapshot of the input area.

Exercise & Maintenance Data: The exercise and maintenance section (Figure 11) captures exercise data (day of sampling, length of exercise, number of days between exercises) and maintenance data (number of CFB Wainwright maintainers, number of TF maintainers, hours of

Figure 10: VOR Projection Calculator: Vehicle Usage Input

availability, etc.). Based on this data, the VOR Projection Calculator calculates the number of maintenance hours available to repair VOR vehicles before the start of the next exercise.

Figure 11: VOR Projection Calculator: Exercise and Maintenance Input

The VOR Projection Calculator is implemented as a VB macro initiated by the press of a button. A quick result window, shown in Figure 12, displays the VOR inference and projection results by fleet type (A and B). The most likely VOR rate and the minimum and maximum VOR rates (based on the error percentages specified for the VOR Inference Calculator) are shown. Pressing the button labeled “COMPUTE” commences the VB macro implementation of the Monte Carlo simulation algorithm (which may take a few minutes to finish). The projected VOR rates are combined with the median work order labour times to compute the estimated number of maintenance hours that will be required to fix all VOR vehicles in the expected, best, and worst cases as per the selected error percentages. These numbers are compared to the actual number of labour hours available before the start of the next exercise. If there is adequate maintenance time, the total number of VOR labour hours is highlighted in green; otherwise it is highlighted in red to indicate a potential deficiency.
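The simulation step can be sketched as follows. This is an illustrative stand-in for the VB macro: the fleet data are hypothetical, and using a fixed MKBF per vehicle is a simplifying assumption, since the actual tool draws the failure rate from the gamma (Bayesian) posterior of Section 5:

```python
import math
import random

def simulate_eox_vor(fleet, days_left, iterations=1000):
    """Monte Carlo projection of the end-of-exercise number of VOR vehicles.

    `fleet` is a list of (already_vor, mkbf, (min_km, mode_km, max_km))
    tuples, one per vehicle. Returns one VOR count per iteration.
    """
    results = []
    for _ in range(iterations):
        vor_count = 0
        for already_vor, mkbf, (lo, mode, hi) in fleet:
            if already_vor:          # found VOR during mid-exercise sampling
                vor_count += 1
                continue
            # Daily distance ~ triangular distribution (Eq. 22).
            km = sum(random.triangular(lo, hi, mode) for _ in range(days_left))
            # Failure by end of exercise ~ exponential model (Eq. 21).
            if random.random() < 1.0 - math.exp(-km / mkbf):
                vor_count += 1
        results.append(vor_count)
    return results
```

The histogram of the returned counts plays the role of the distribution graphed by the tool.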

Figure 12: VOR Projection Calculator: Quick Results

More detailed VOR inference and projection results are presented on a separate Excel sheet. As shown in Figure 13, the VOR projection results are decomposed per vehicle type, and two histograms representing the distributions of the number of A fleet and B fleet VOR vehicles, respectively, are graphed (not shown in the figure). The lower bound on the number of VOR vehicles is calculated by taking the abscissa of the distribution (assumed to be normal) that cuts off an area based on the user-defined best-case error. The upper bound is calculated similarly, based on the worst-case error.
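The quantile computation just described can be sketched as follows, using the normal approximation stated above; `mean_vor` and `std_vor` are assumed to come from the simulated distribution:

```python
from statistics import NormalDist

def vor_bounds(mean_vor, std_vor, best_err=0.05, worst_err=0.20):
    """Best- and worst-case VOR counts as quantiles of the simulated
    distribution, assumed normal (illustrative sketch of the tool's logic)."""
    dist = NormalDist(mean_vor, std_vor)
    lower = dist.inv_cdf(best_err)        # cuts off the best-case error area
    upper = dist.inv_cdf(1 - worst_err)   # cuts off the worst-case error area
    return lower, upper

# Example: simulated mean of 50 VOR vehicles with standard deviation 4.
low, high = vor_bounds(50.0, 4.0)
```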

7.4 VOR Management Optimizer

The VOR Management Optimizer is a tool designed to support the MWC's decision-making when exercises are scheduled back-to-back. Further user-inputted data is captured to list the requirements of the next TF and the capabilities of the base. Figure 14 shows the user input required.

Figure 13: VOR Projection Calculator: Detailed Results

Figure 14: VOR Management Optimizer: Input

Based on the requirements for the two exercises, and the projected VOR rates, the Excel model provides a quick snapshot (see Figure 15) of the CFB’s readiness to host the next TF exercise prior to any maintenance actions. For each vehicle, the projected availability is computed giving the MWC an indication of which vehicles should have maintenance priority.

Figure 15: VOR Management Optimizer: Detailed Results

Finally, the Excel Solver add-on is employed via VB macros to implement the VOR Management model described in Section 6. The optimization prioritizes fixing VOR vehicles required for the

next exercise and then all other VOR vehicles if excess labour time is available. Four different objectives are implemented:

1. maximize the minimum availability: the optimization favours maintenance on vehicles in order to maximize an availability threshold that all the vehicles exceed;
2. maximize the overall availability: the optimization maximizes the total availability of the vehicle fleet;
3. additional constraints on the minimum availability of each vehicle type are added to (1) and then optimized; and
4. additional constraints on the minimum availability of each vehicle type are added to (2) and then optimized.

Figure 16 shows a screen shot of the VOR Management Optimizer. The four optimization buttons correspond to the four objective functions listed above. In the case of objectives (3) and (4), the user can input the minimum availability in the yellow-shaded input column. The grey-shaded cell displays the total number of labour hours used, based on historical median WO labour hours. When the VOR Management Optimizer is used mid-exercise to determine readiness for the subsequent exercise, the user can toggle between the best-case, expected, and worst-case end-of-exercise VOR projections using the buttons provided. The user then presses the button corresponding to the desired objective function. The VOR Management Optimizer should also be used post-exercise, when better estimates of the required labour hours per VOR vehicle are known, to provide more accurate results. In all cases, the user can extract how many vehicles would not be repaired due to labour hour constraints, and determine how many more labour hours (maintainers) would suffice to cover the deficiency.

Figure 16: VOR Management Optimizer: Optimization

8 Application: Exercise MAPLE GUARDIAN 2007

At the end of April 2007, approximately 2,500 soldiers from the Québec-based 5 Canadian Mechanized Brigade Group came to the Canadian Manoeuvre Training Centre (CMTC) at CFB Wainwright to prepare for their mission in Afghanistan as part of Operation Athena. The exercise, called Exercise Maple Guardian 2007 (MG0701), was a training activity of immense scale, encompassing individual and collective exercises designed to prepare soldiers for the various types of situations that they are expected to confront in Afghanistan [24]. Figure 17 depicts vehicle usage during the exercise. The following 303 vehicles were lent out by CFB Wainwright for the 22-day exercise:

Vehicle (A Fleet)   Qty     Vehicle (B Fleet)   Qty
Bison                16     LUVW                 47
LAV Coyote           22     MLVW                 30
LAV III APC          74     LSVW                 61
TLAV                  4     HLVW                 49
A Fleet Total:      116     B Fleet Total:      187

Figure 17: Vehicle Use During Exercise Maple Guardian 2007 (Photo Credit - Cpl Simon Duchesne, JTF 3-07 photographer [24])

8.1 MG0701 Vehicle Sampling

On day 15 of MG0701, the MWC CFB/ASU Wainwright sent out a team of maintainers to sample the vehicles in order to estimate the VOR rate. The Sample Size Calculator (Section 3) was applied to determine the required sample size. The MWC elected to use proportional allocation (for stratified sampling), indicating a desired confidence level of 95% and a precision level of 5%. No prior additional information was known, so the degree of variability for all vehicles was set to 0.5. The Sample Size Calculator computed that 54 A fleet vehicles and 65 B fleet vehicles should be inspected. Table 4 displays the proportional allocation to the individual vehicle types. Unfortunately the sampling success (the percentage of vehicles actually sampled compared to the suggested

sample size) was 31% for the A fleet and 40% for the B fleet. The MWC noted that this was indeed much lower than desired [25]. In Table 4 the suggested sample sizes (as calculated by the Sample Size Calculator) are compared to the actual number of vehicles inspected. The last column, “% Success”, represents the percentage of the recommended number of vehicles that were inspected.

Table 4: Suggested & Actual Sample Sizes for MG0701

Vehicle       Suggested   Actual   % Success
BISON              8         2        25%
LAV COYOTE        10         0         0%
LAV III APC       34        15        44%
TLAV               2         0         0%
LUVW              16        11        69%
MLVW              11         8        73%
LSVW              21         1         5%
HLVW              17         6        35%

8.2 MG0701 VOR Inference

Given the sampling data, the VOR Inference Calculator was applied to determine the additional information obtained from sampling about the vehicle VOR rates. Figures 18 and 19 graph the probability distributions, per vehicle type, of the number of VOR vehicles. For each vehicle, two distributions are plotted: the probability distribution before sampling (with no prior knowledge), and the probability distribution given the sampling data. The larger the sample size, the greater the difference between the two distributions. For the LAV Coyote and the TLAV, these distributions are the same as no vehicles of these types were sampled. The MWC selected 5% as an allowable best-case error and 20% as an allowable worst-case error, indicating the livable overestimation and underestimation errors respectively. The results were as follows: for the A fleet, the most likely VOR rate was 16%, and at most 33% with 80% certainty; for the B fleet, the most likely VOR rate was 11%, and at most 39% with 80% certainty. Table 5 lists the detailed inference results.³ The columns labeled “Min” and “Max” correspond to the best- and worst-case bounds as chosen by the MWC. For aesthetic reasons, the headings of the tables in this chapter are abbreviated: vehicles, minimum, median, most likely, maximum, average, standard deviation, and availability are represented by vehs, min, med, likely, max, avg, std dev, and avail respectively.

8.3 MG0701 VOR Projection

Sampling was only performed once during MG0701 due to the high operational tempo of the exer- cise. In order to get an idea of the expected VOR rate at the end of the exercise we employ the VOR Projection Calculator. As additional input, fleet usage data is required. Unfortunately odometer readings were not taken during the sampling procedure and the MWC was only able to guess the vehicle usage during the first 15 days of the exercise. In an attempt to get a more realistic estimate

3. The expected VOR rate for the LAV Coyote and TLAV is listed as 50% in the table. In reality all VOR rates, from 0% to 100%, for these vehicle types are equally likely given no prior knowledge nor sampling data.

Figure 18: MG0701 VOR Inference: A Fleet VOR Probability Distributions (panels: BISON, LAV III APC, COYOTE, TLAV)

Figure 19: MG0701 VOR Inference: B Fleet VOR Probability Distributions (panels: LUVW, LSVW, MLVW, HLVW)

of the daily mileage, FMS mileage data for the months of September, October, and November 2006 was extracted. During this time there were two exercises (MG0603 and MG0604) for a total of 55 exercise days. This data was used to get a most-likely estimate of the daily usage per vehicle during MG0701. The estimates entered into the VOR Projection Calculator are shown in Table 6. The best-case error was set to 5% and the worst-case error was set to 20%.

Table 5: VOR Inference for MG0701

Vehicle       # Vehs   Sample Size   # VOR   Min VOR   Likely VOR   Max VOR
BISON             16        2           0        0          0           6
LAV COYOTE        22        0           0        0         11          18
LAV III APC       74       15           1        1          5          12
TLAV               4        0           0        0          2           3
LUVW              47       11           1        1          4          10
MLVW              30        8           0        0          0           4
LSVW              61        1           0        0          0          33
HLVW              49        6           2        6         16          25

Table 6: Estimated Daily Vehicle Usage (Mileage) Data for MG0701

Vehicle       Most Likely (kms)
BISON              14.25
LAV COYOTE         23.4
LAV III APC        29
TLAV                0.3
LUVW               43.4
MLVW                9.34
LSVW                9.59
HLVW               24.65

The number of VOR vehicles and their estimated mileage data was combined with historical failure data to compute the failure rate distribution as described in Section 5. The VOR Projection Calculator initiated a Monte Carlo simulation for 1000 iterations, outputting a distribution of the number of projected VOR vehicles by fleet type. The histograms of the number of VOR vehicles are shown in Figure 20 and vehicle-specific statistics on the number of VOR vehicles are compiled in Table 7. The simulation results indicate the projected VOR rates at the end of Maple Guardian 2007: we should have expected 50% of the A fleet vehicles to be VOR and 36% of the B fleet vehicles to be VOR.

Figure 20: MG0701 End-Of-Exercise Projection Histograms of the Number of VOR Vehicles (frequency vs. number of vehicles VOR, for the A fleet and B fleet)

Table 7: MG0701 VOR Projection Calculator Statistics on the Number of VOR Vehicles

Vehicle        Min     Avg     Max   Std Dev
BISON            0     4.68     11     1.79
LAV COYOTE       2    10.30     18     2.48
LAV III APC     29    42.93     57     3.97
TLAV             0     0.11      2     0.34
LUVW             2    11.08     21     2.76
MLVW             1     6.98     16     2.35
LSVW             2     9.87     19     2.82
HLVW             5    14.16     25     2.98

At the end of MG0701, the CFB Wainwright maintenance workshop performed safety inspections on 73 vehicles still in the field. From these, the A fleet VOR rate was found to be 22% and the B fleet VOR rate was found to be 12.5%. This was a limited inspection as many vehicles continued to be used for Live Fire Ranges during the following days. When all the vehicles had returned to the base, a quick inspection of all the vehicles lent out reported VOR rates of 33% (A fleet) and 29% (B fleet). Table 8 summarizes the actual observations compared to the simulation results, including the best-case and worst-case limits. In this example, the VOR Projection Calculator over-estimated the VOR rate for both fleets.

Table 8: MG0701 VOR Rates: Actual vs. Projected

Fleet      Actual VOR   Best-Case   Most Likely   Worst-Case
A Fleet        33%         42%          50%           54%
B Fleet        29%         28%          36%           40%

8.4 MG0701 VOR Management

VOR management post MG0701 was not a major issue as the next large-scale exercise, Maple Defender 2007 (MD0701) [26], was scheduled for August 2007, giving the Maintenance Workshop ample time to repair all vehicles. In order to demonstrate the use of the VOR Management Optimizer, we consider a hypothetical exercise identical to MG0701 commencing two weeks after the end of MG0701. We assume there are 12 CFB/ASU maintainers that can each dedicate 5.25 hours per day to restoring the VOR vehicles considered. In addition, the TF has promised that 10 TF maintainers will remain at the base for 3 extra days post-exercise to assist the CFB/ASU maintainers. This is in addition to the 10 hrs of maintenance that the TF allocates daily to VOR vehicles during the MG0701 exercise. The hypothetical data, shown in Table 9, was chosen to reflect the actual VOR rates of MG0701. Let us assume that the VOR Projection Calculator was run on day 15 of the exercise. At this point a total of 1109.5 labour hours are available prior to the start of the subsequent exercise, and it is estimated that 1527 hours will have to be allocated to fix the VOR vehicles, 2848 hours in the worst-case. The projected availability (the percentage of required vehicles that are in working order), prior to maintenance, of the vehicles for the next exercise is listed in the last three columns of Table 9. These are based on the total number of vehicles per type at CFB Wainwright

(second column).
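The 1109.5 available labour hours can be reproduced under plausible assumptions. The breakdown below assumes the 10 TF maintainers also work 5.25-hour days during their 3 extra days, and that 7 exercise days remain after the day-15 projection; the report does not state this split explicitly:

```python
# One breakdown that reproduces the 1109.5 labour hours quoted in the text.
days_between = 14                       # two weeks between exercises
base_hours = 12 * 5.25 * days_between   # 12 CFB/ASU maintainers, 5.25 hrs/day
tf_post_hours = 10 * 5.25 * 3           # 10 TF maintainers, 3 extra days (assumed 5.25 hrs/day)
tf_during_hours = 10 * (22 - 15)        # 10 hrs/day for the 7 remaining exercise days
total_hours = base_hours + tf_post_hours + tf_during_hours
```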

Table 9: Example VOR Projection Calculator Results

Vehicle        # at CFB     Number VOR        VOR Labour Hours           Availability for Next TF
               Wainwright  min  med  max    min      med      max        min    med    max
BISON              16        2    7   15    39.5    138.25   296.25       6%    56%    88%
LAV COYOTE         31        0    6   14     0      219      511         77%   100%   100%
LAV III APC       121       16   31   45   388      751.75  1091.25     100%   100%   100%
TLAV               14        0    1    4     0       21       84        100%   100%   100%
LUVW              141        1    6   14     8.75    52.5    122.5      100%   100%   100%
MLVW               30        0    5   12     0       77.5    186         60%    83%   100%
LSVW              130        0    6   14     0       69      161        100%   100%   100%
HLVW               92        4   11   22    72      198      396        100%   100%   100%
Totals            575       23   73  141   508.25  1527     2848

The VOR Management Optimizer was run with the objective set to maximizing the overall availability of the vehicles. The optimization results, shown in Table 10, indicate that even in the worst-case, the maintenance crew should be able to handle the projected VOR rate and hand the necessary vehicles over to the subsequent TF exercise in operating condition. The results also indicate that additional maintenance hours would be required to return all the vehicle fleets at CFB Wainwright to 100% availability: an overall availability of 94% for the total fleet can be achieved in the expected case (after 1109.3 hours of maintenance), and 76% in the worst-case (after 1098 maintenance hours).

Table 10: Example VOR Management Optimizer Results

                         Expected-Case                         Worst-Case
Vehicle      # To be Fixed  CFB Avail.  TF Avail.   # To be Fixed  CFB Avail.  TF Avail.
BISON              7           100%       100%           15           100%       100%
LAV COYOTE         5            97%       100%            7            77%       100%
LAV III APC       24            94%       100%           14            74%       100%
TLAV               1           100%       100%            1            79%       100%
LUVW               0            96%       100%            0            90%       100%
MLVW               5           100%       100%           12           100%       100%
LSVW               0            95%       100%            0            89%       100%
HLVW               6            95%       100%            0            76%       100%

9 Conclusions and Advice

9.1 Summary

This study is a response to a VOR challenge at CFB Wainwright. Four models were developed and implemented in Microsoft Excel to aid the Maintenance Workshop Commander in optimizing vehicle sampling and managing vehicle VOR rates. The first model, the Sample Size Calculator, calculates the number of vehicles that should be sampled during a sampling procedure in order to get statistically significant information. Stratified random sampling with either proportional, optimal cost, or optimal precision level allocation is proposed. The second model, the VOR Inference Calculator, is used upon completion of the vehicle sampling procedure. Based on specified error margins, it provides the range of the estimated VOR rate of all the vehicles. The model is based on the hypergeometric probability distribution. The VOR Projection Calculator implements a Monte Carlo simulation algorithm that takes into account both historical and observed failure rate data, estimated or observed mileage, and expected repair times to project the end-of-exercise VOR rate and required number of maintenance hours. Finally, the VOR Management Optimizer provides a tool for the MWC to assess and give sound advice to the CMTC Training Authority on VOR levels and to prioritize vehicle maintenance for subsequent training exercises.

This study also determined the most-likely repair times and vehicle failure rates based on vehicle usage and work order data, specific to vehicles at CFB Wainwright. In addition, for each vehicle type, a list of the top component failures was extracted.

The VOR Sample Size Calculator, VOR Inference Calculator, and VOR Projection Calculator were successfully executed during exercise Maple Guardian MG0701. Unfortunately the Maintenance Workshop inspection team did not sample the number of vehicles suggested by the VOR Sample Size Calculator. This led to limited information being inputted into the VOR Inference Calculator. Similarly, vehicle odometer readings were not recorded and vehicle usage was only roughly estimated. Despite the limited or missing data, the models provided the MWC with scientifically-backed VOR rate estimates. These results have encouraged the MWC to enforce a more systematic sampling procedure for subsequent exercises.

9.2 Advice

The MWC is advised to experiment with the proposed models to measure their practical effectiveness. In doing so, the following should be respected:
1. to best utilize maintenance resources, the MWC should select (per vehicle) a subset of components to be inspected during the sampling procedure. This selection should be based on the provided list of the top component failures and balanced against inspection time per vehicle. The relative inspection time per vehicle should then be estimated and inputted into the Sample Size Calculator in order to use stratified random sampling with optimal allocation. The calculator optimizes resource use for the desired confidence and precision levels, taking into account relative sampling costs and VOR variability;
2. vehicle sampling should strive to attain the suggested sample size, utilizing a balance of TF technicians, vehicle operators, and ASU technicians. Simple random sampling should be

implemented. The vehicles to be inspected should be determined before reaching the forward operating base. The author suggests randomly choosing vehicles by their CFR plate numbers. An unrepresentative sample might suggest that the fleet is in better (or worse) condition than it is in reality;
3. vehicle odometer readings should be noted before the start of an exercise and recorded when sampled. The TF should aid in providing the estimated daily mileage of each vehicle type (minimum, most-likely, and maximum kms);
4. vehicle failures and mileage should be carefully recorded in the Work Order and FMS databases. Historical data used in the proposed models should be routinely updated in order to remain consistent; and
5. assessment of VOR levels should be based on the projected number of maintenance hours and the availability requirements for the next TF exercise. The VOR rate on its own does not provide enough information.

9.3 Future Work

Multiple factors influence the VOR rate of vehicles during a training exercise. The proposed models take into account observed failure data, but do not directly account for adverse climate/weather conditions, conditions of use, exercise tempo, availability of spare parts, quality of operator care, etc. While the historical failure rate data may encompass some of these items, further work can be done to include these factors. The current VOR inference model hinges on the vehicle sampling being representative and without biases. Future models may attempt to relax the simple random sampling assumption in order to capture potential biases. Finally, while the model implementations are specific to the VOR challenge of CFB Wainwright, the theory is not, and can be applied to other CF training centres. This should be considered following experimentation with the model at CFB Wainwright.

References

[1] Pond, G., Vehicle Fleet Maintenance Study at CFB Wainwright, 11-20-2006.
[2] Pond, G.T. (2007), Initial Investigation of Vehicle Sampling for Inspection at CMTC, (DRDC CORA TM 2007–024) Defence R&D Canada – CORA.
[3] Desmier, P.E. (2007), DMGOR responses to Call Letter FY 07/08.
[4] The EME Handbook, Department of National Defence (Canada), 09-15-1995, B-GL-314-008/AM-002.
[5] The Fleet Management System (January 2007) (online), http://fms.mil.ca.
[6] A.G. Gallant, Director Materiel Information Systems 4-6-3, E-mail: Equipment Data, 02-08-2007.
[7] Desmier, P.E. (2004), Estimating the Reliability of the Medium Logistics Vehicle Wheeled (MLVW), (Research Note RN 2004/03) ORD, Directorate of Operational Research (Corporate), Ottawa, Canada.
[8] Kaluzny, B.L. and Erkelens, A.J. (2006), The Optimal MSVS Fleet Mix for First-Line Replenishment, (DRDC CORA TR 2006–026) Defence R&D Canada – CORA.
[9] Maj. Morin, Director Land Requirements 6-2, E-mail: Mean Times Between Failures MSVS Study, 04-26-2006.
[10] Maj. Fitzpatrick, OC Maint. CFB/ASU Wainwright, E-mail: CFB Wainwright VOR/Sampling OR problem, 03-13-2007.
[11] Cochran, W.G. (1977), Sampling Techniques, 3rd ed., New York: John Wiley & Sons.
[12] Feller, W. (1957), An Introduction to Probability Theory and Its Applications, 2nd ed., New York: John Wiley & Sons.
[13] Hájek, J. (1960), Limiting distributions in simple random sampling from a finite population, Pub. Math. Inst. Hungarian Acad. Sci., 5, 361–374.
[14] Erdös, P. and Rényi, A. (1959), On the central limit theorem for samples from a finite population, Pub. Math. Inst. Hungarian Acad. Sci., 4, 49–57.
[15] Madow, L.H. (1948), On the limiting distributions of estimates based on samples from finite universes, Ann. Math. Stat., 19, 535–545.
[16] Bayes, T. (1763), An essay towards solving a problem in the doctrine of chances, Philos. Trans. R. Soc. London, 53, 370–418.
[17] Vose, D. (2001), Risk Analysis, 2nd ed., New York: John Wiley & Sons.
[18] Blischke, W.R. and Murthy, D.N.P. (2000), Reliability: Modeling, Prediction, and Optimization, Wiley Series in Probability and Statistics, New York: John Wiley & Sons.
[19] NIST/SEMATECH e-Handbook of Statistical Methods (June 2007) (online), http://www.itl.nist.gov/div898/handbook/.
[20] Robert, C.P. and Casella, G. (2004), Monte Carlo Statistical Methods, 2nd ed., New York: Springer-Verlag.
[21] Wolsey, L.A. (1998), Integer Programming, New York: John Wiley & Sons.
[22] Microsoft Office Excel 2003, http://office.microsoft.com/en-us/excel/FX100487621033.aspx.
[23] Microsoft Visual Basic 6.3, http://msdn.microsoft.com/vbasic/.
[24] Exercise Maple Guardian is in full swing at Wainwright! (May 2007) (online), http://www.dnd.ca/site/feature_story/2007/05/09_e.asp.
[25] Maj. Fitzpatrick, OC Maint. CFB/ASU Wainwright, E-mail: RE: CFB Wainwright VOR/Sampling OR problem, 06-22-2007.
[26] Exercise Maple Defender 2007 (August 2007) (online), http://www.army.forces.gc.ca/ExMapleDefender/home.html.
[27] Stuart, A. (1954), A simple presentation of optimum sampling results, Journ. Roy. Stat. Soc., B16, 239–241.

Annex A: Stratified Random Sampling Derivations

The theorems, lemmas, corollaries and their proofs and derivations presented in this appendix are well-established results from stratified sampling theory. They are presented succinctly in Cochran [11] (and further references therein) and are reproduced here in elaborate detail for academic purposes, in order to make this paper self-contained and allow scientists to understand the mathematics behind sampling theory. The results in this section should not be attributed to the author.

A.1 Sample Estimate and Variance

In stratified sampling, the population of N units is divided into subpopulations, called strata, of N_1, N_2, ..., N_L units respectively. A simple random sample is drawn from each stratum, of sizes n_1, n_2, ..., n_L respectively, where L is the number of strata.

The suffix $h$ denotes the stratum and $i$ the unit within the stratum. Further notation is as follows:

  N                Number of vehicles in fleet (A or B)
  N_h              Number of vehicles of type h
  n                Number of vehicles sampled
  n_h              Number of vehicles of type h in the sample
  A_h              Number of VOR vehicles of type h in the population
  a_h              Number of VOR vehicles of type h in the sample
  P_h = A_h/N_h    Proportion of VOR units of type h in the population
  Q_h = 1 - P_h    Proportion of serviceable units of type h in the population
  p_h = a_h/n_h    Unbiased sample estimate of P_h
  q_h = 1 - p_h    Sample estimate of Q_h
  W_h = N_h/N      Stratum weight

Define $y_{hi}$, for $i = 1, \ldots, N_h$, as 1 if vehicle $i$ of stratum $h$ is VOR, and 0 otherwise.

Theorem. If in every stratum the sample estimate $p_h$ is unbiased, then the following estimate $p_{st}$ (subscript st indicating stratified sampling) of the proportion of VOR vehicles in the population is also unbiased:

$$ p_{st} = \sum_{h=1}^{L} \frac{N_h\, p_h}{N}. \quad (A.1) $$

Proof.
$$ E[p_{st}] = E\left[\sum_{h=1}^{L} W_h p_h\right] = \sum_{h=1}^{L} W_h P_h, \quad (A.2) $$
since the estimates $p_h$ are unbiased in the individual strata. The population proportion $P$ may be written as
$$ P = \frac{\sum_{h=1}^{L}\sum_{i=1}^{N_h} y_{hi}}{N} = \frac{\sum_{h=1}^{L} N_h P_h}{N} = \sum_{h=1}^{L} W_h P_h. \quad (A.3) $$

Theorem. The variance of the estimate $p_{st}$ is
$$ V(p_{st}) = \frac{1}{N^2} \sum_{h=1}^{L} \frac{N_h^2 (N_h - n_h)}{N_h - 1} \cdot \frac{P_h Q_h}{n_h}. \quad (A.4) $$

To prove the theorem, we first show that the variance $V(p_h)$ of the mean $p_h$ of a simple random sample from stratum $h$ is
$$ V(p_h) = E\left[(p_h - P_h)^2\right] = \frac{S_h^2}{n_h} \cdot \frac{N_h - n_h}{N_h}. \quad (A.5) $$
We then show that the variance

$$ V(p_{st}) = \frac{1}{N^2} \sum_{h=1}^{L} N_h^2 V(p_h) = \frac{1}{N^2} \sum_{h=1}^{L} N_h (N_h - n_h) \frac{S_h^2}{n_h}, \quad (A.6) $$
and complete the proof by deriving
$$ S_h^2 = \frac{N_h}{N_h - 1} P_h Q_h. \quad (A.7) $$
Equations (A.5), (A.6) and (A.7) are stated as lemmas and proved in what follows.
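As a numerical illustration of (A.1) and (A.4), the following sketch computes the stratified VOR estimate and its variance. The strata counts and inspection results are hypothetical, chosen for illustration only; they are not data from this study.

```python
# Stratified VOR-rate estimate (A.1) and its variance (A.4).
# All counts below are hypothetical, for illustration only.
N_h = [40, 25, 10]   # vehicles of each type in the fleet
n_h = [8, 5, 4]      # vehicles sampled per stratum
a_h = [2, 1, 1]      # VOR vehicles found in each sample

N = sum(N_h)
p_h = [a / n for a, n in zip(a_h, n_h)]                # per-stratum estimates
p_st = sum(Nh * ph for Nh, ph in zip(N_h, p_h)) / N    # equation (A.1)

# Variance (A.4), substituting the sample p_h, q_h for the unknown P_h, Q_h
V = sum(Nh**2 * (Nh - nh) / (Nh - 1) * ph * (1 - ph) / nh
        for Nh, nh, ph in zip(N_h, n_h, p_h)) / N**2
print(round(p_st, 4), round(V, 6))
```

In practice the true $P_h$, $Q_h$ are unknown, so the sample proportions are plugged in, as done in the corollary on unbiased variance estimates below.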

Lemma.
$$ V(p_h) = E\left[(p_h - P_h)^2\right] = \frac{S_h^2}{n_h} \cdot \frac{N_h - n_h}{N_h}. \quad (A.8) $$

Proof. The variance of the mean $p_h$ of a simple random sample is $E\left[(p_h - P_h)^2\right]$ taken over all possible samples. Let $y_{h1}, y_{h2}, \ldots, y_{hn_h}$ be a simple random sample.

$$ p_h = \frac{\sum_{i=1}^{n_h} y_{hi}}{n_h} \quad (A.9) $$
$$ n_h p_h = \sum_{i=1}^{n_h} y_{hi} \quad (A.10) $$
$$ n_h (p_h - P_h) = \sum_{i=1}^{n_h} y_{hi} - n_h P_h \quad (A.11) $$
$$ = \sum_{i=1}^{n_h} (y_{hi} - P_h) \quad (A.12) $$
$$ = (y_{h1} - P_h) + (y_{h2} - P_h) + \cdots + (y_{hn_h} - P_h). \quad (A.13) $$

Every unit appears in the same number of samples (when considering all possible samples). This means $E[y_{h1} + y_{h2} + \cdots + y_{hn_h}]$ is a multiple of $y_{h1} + y_{h2} + \cdots + y_{hN_h}$, where the multiplier is $n_h/N_h$ since the first expression has $n_h$ terms and the second has $N_h$ terms. It then follows that
$$ E\left[(y_{h1} - P_h)^2 + \cdots + (y_{hn_h} - P_h)^2\right] = \frac{n_h}{N_h}\left[(y_{h1} - P_h)^2 + \cdots + (y_{hN_h} - P_h)^2\right], \quad (A.14) $$
and
$$ E\left[(y_{h1} - P_h)(y_{h2} - P_h) + (y_{h1} - P_h)(y_{h3} - P_h) + \cdots + (y_{h(n_h-1)} - P_h)(y_{hn_h} - P_h)\right] $$
$$ = \frac{n_h (n_h - 1)}{N_h (N_h - 1)}\left[(y_{h1} - P_h)(y_{h2} - P_h) + (y_{h1} - P_h)(y_{h3} - P_h) + \cdots + (y_{h(N_h-1)} - P_h)(y_{hN_h} - P_h)\right]. \quad (A.15) $$
In (A.15) the sums of products extend over all pairs of units in the sample and population respectively. The left-hand side contains $n_h(n_h - 1)/2$ terms and the right-hand side contains $N_h(N_h - 1)/2$ terms.

Now square (A.13) and average over all simple random samples.

$$ n_h^2 (p_h - P_h)^2 = \left[(y_{h1} - P_h) + (y_{h2} - P_h) + \cdots + (y_{hn_h} - P_h)\right]^2 \quad (A.16) $$
$$ = (y_{h1} - P_h)^2 + (y_{h2} - P_h)^2 + \cdots + (y_{hn_h} - P_h)^2 \quad (A.17) $$
$$ \qquad + 2\left[(y_{h1} - P_h)(y_{h2} - P_h) + (y_{h1} - P_h)(y_{h3} - P_h) + \cdots + (y_{h(n_h-1)} - P_h)(y_{hn_h} - P_h)\right]. $$
Taking expectations of both sides,
$$ n_h^2\, E\left[(p_h - P_h)^2\right] = E\left[(y_{h1} - P_h)^2 + (y_{h2} - P_h)^2 + \cdots + (y_{hn_h} - P_h)^2\right] \quad (A.18) $$
$$ \qquad + 2\, E\left[(y_{h1} - P_h)(y_{h2} - P_h) + (y_{h1} - P_h)(y_{h3} - P_h) + \cdots + (y_{h(n_h-1)} - P_h)(y_{hn_h} - P_h)\right]. \quad (A.19) $$

Using (A.14) and (A.15) we obtain
$$ n_h^2\, E\left[(p_h - P_h)^2\right] = \frac{n_h}{N_h}\left[(y_{h1} - P_h)^2 + \cdots + (y_{hN_h} - P_h)^2\right] \quad (A.20) $$
$$ \qquad + \frac{2\, n_h (n_h - 1)}{N_h (N_h - 1)}\left[(y_{h1} - P_h)(y_{h2} - P_h) + \cdots + (y_{h(N_h-1)} - P_h)(y_{hN_h} - P_h)\right] \quad (A.21) $$
$$ = \frac{n_h}{N_h}\left[ (y_{h1} - P_h)^2 + \cdots + (y_{hN_h} - P_h)^2 + \frac{2(n_h - 1)}{N_h - 1}\left[(y_{h1} - P_h)(y_{h2} - P_h) + \cdots + (y_{h(N_h-1)} - P_h)(y_{hN_h} - P_h)\right]\right]. \quad (A.22) $$

Completing the square of the cross-product term we have
$$ n_h^2\, E\left[(p_h - P_h)^2\right] = \frac{n_h}{N_h}\Big[ (y_{h1} - P_h)^2 + \cdots + (y_{hN_h} - P_h)^2 \quad (A.23) $$
$$ \qquad + \frac{n_h - 1}{N_h - 1}\left[(y_{h1} - P_h) + \cdots + (y_{hN_h} - P_h)\right]^2 $$
$$ \qquad - \frac{n_h - 1}{N_h - 1}\left[(y_{h1} - P_h)^2 + \cdots + (y_{hN_h} - P_h)^2\right] \Big] $$
$$ = \frac{n_h}{N_h}\left(1 - \frac{n_h - 1}{N_h - 1}\right) \sum_{i=1}^{N_h} (y_{hi} - P_h)^2 \quad (A.24,\ A.25) $$
$$ \qquad + \frac{n_h}{N_h} \cdot \frac{n_h - 1}{N_h - 1}\left[\sum_{i=1}^{N_h} y_{hi} - N_h P_h\right]^2. \quad (A.26) $$

The term (A.26) vanishes since $\sum_{i=1}^{N_h} y_{hi} = N_h P_h$, leaving

$$ n_h^2\, E\left[(p_h - P_h)^2\right] = \frac{n_h (N_h - n_h)}{N_h (N_h - 1)} \sum_{i=1}^{N_h} (y_{hi} - P_h)^2. \quad (A.27) $$

Division by $n_h^2$ gives
$$ V(p_h) = E\left[(p_h - P_h)^2\right] = \frac{N_h - n_h}{n_h N_h (N_h - 1)} \sum_{i=1}^{N_h} (y_{hi} - P_h)^2 = \frac{S_h^2}{n_h} \cdot \frac{N_h - n_h}{N_h}. \quad (A.28) $$

Lemma.
$$ V(p_{st}) = \frac{1}{N^2} \sum_{h=1}^{L} N_h^2 V(p_h) = \frac{1}{N^2} \sum_{h=1}^{L} N_h (N_h - n_h) \frac{S_h^2}{n_h}. \quad (A.29) $$

Proof.

$$ p_{st} = \sum_{h=1}^{L} \frac{N_h\, p_h}{N}. \quad (A.30) $$

$p_{st}$ is a linear function of the $p_h$ with fixed weights $N_h/N$. Hence the variance of $p_{st}$ is expressed as
$$ V(p_{st}) = \sum_{h=1}^{L} \frac{N_h^2}{N^2} V(p_h) + 2 \sum_{h=1}^{L} \sum_{j>h} \frac{N_h N_j}{N^2}\, \mathrm{Cov}(p_h, p_j), \quad (A.31) $$

where $\mathrm{Cov}(p_h, p_j)$ is the covariance of $p_h$ and $p_j$. Since samples are drawn independently in different strata, all covariance terms equal zero.

$$ V(p_{st}) = \frac{1}{N^2} \sum_{h=1}^{L} N_h^2 V(p_h) \quad (A.32) $$
$$ = \frac{1}{N^2} \sum_{h=1}^{L} N_h^2 \cdot \frac{S_h^2}{n_h} \cdot \frac{N_h - n_h}{N_h} \quad (A.33) $$
$$ = \frac{1}{N^2} \sum_{h=1}^{L} N_h (N_h - n_h) \frac{S_h^2}{n_h}. \quad (A.34) $$

Lemma.

$$ S_h^2 = \frac{N_h}{N_h - 1} P_h Q_h. \quad (A.35) $$

Proof.

$$ \sum_{i=1}^{N_h} y_{hi} = A_h, \quad (A.36) $$

$$ \frac{\sum_{i=1}^{N_h} y_{hi}}{N_h} = \frac{A_h}{N_h} = P_h. \quad (A.37) $$
Note that

$$ \sum_{i=1}^{N_h} y_{hi}^2 = A_h = N_h P_h \quad (A.38) $$
since $1^2 = 1$ and $0^2 = 0$. Hence

$$ S_h^2 = \frac{\sum_{i=1}^{N_h} (y_{hi} - P_h)^2}{N_h - 1} \quad (A.39) $$
$$ = \frac{\sum_{i=1}^{N_h} y_{hi}^2 - N_h P_h^2}{N_h - 1} \quad (A.40) $$
$$ = \frac{1}{N_h - 1}\left(N_h P_h - N_h P_h^2\right) \quad (A.41) $$
$$ = \frac{N_h}{N_h - 1}\left(P_h - P_h^2\right) = \frac{N_h}{N_h - 1} P_h (1 - P_h) \quad (A.42) $$
$$ = \frac{N_h}{N_h - 1} P_h Q_h. \quad (A.43) $$

Corollary. $s_h^2 = \frac{n_h}{n_h - 1} p_h q_h$, and $s_h^2$ is an unbiased estimate of $S_h^2$.

Proof. The derivation of $s_h^2 = \frac{n_h}{n_h - 1} p_h q_h$ is similar to the derivation of $S_h^2 = \frac{N_h}{N_h - 1} P_h Q_h$ presented in the previous lemma. To prove that $s_h^2 = \frac{\sum_{i=1}^{n_h} (y_{hi} - p_h)^2}{n_h - 1}$ is an unbiased estimate of $S_h^2 = \frac{\sum_{i=1}^{N_h} (y_{hi} - P_h)^2}{N_h - 1}$, we first write
$$ s_h^2 = \frac{1}{n_h - 1} \sum_{i=1}^{n_h} \left[(y_{hi} - P_h) - (p_h - P_h)\right]^2 \quad (A.44) $$
$$ = \frac{1}{n_h - 1} \left[\sum_{i=1}^{n_h} (y_{hi} - P_h)^2 - n_h (p_h - P_h)^2\right]. \quad (A.45) $$

Now average over all simple random samples of size $n_h$:
$$ E[s_h^2] = \frac{1}{n_h - 1} \left( E\left[\sum_{i=1}^{n_h} (y_{hi} - P_h)^2\right] - E\left[n_h (p_h - P_h)^2\right] \right). \quad (A.46) $$

Note that
$$ E\left[\sum_{i=1}^{n_h} (y_{hi} - P_h)^2\right] = \frac{n_h}{N_h} \sum_{i=1}^{N_h} (y_{hi} - P_h)^2 \quad (A.47) $$
$$ = \frac{n_h (N_h - 1)}{N_h} S_h^2, \quad (A.48) $$

and, by the derivation of $S_h^2$ and (A.5),
$$ E\left[n_h (p_h - P_h)^2\right] = \frac{N_h - n_h}{N_h} S_h^2. \quad (A.49) $$
Substituting (A.48) and (A.49) into (A.46) we find

$$ E[s_h^2] = \frac{S_h^2}{N_h (n_h - 1)} \left[ n_h (N_h - 1) - (N_h - n_h) \right] = S_h^2. \quad (A.50) $$
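The unbiasedness in (A.50) can be checked exhaustively on a small population: average $s_h^2 = \frac{n_h}{n_h-1} p_h q_h$ over every possible simple random sample and compare with $S_h^2$. The sketch below uses a hypothetical stratum of six vehicles, two of them VOR; it is not data from the report.

```python
from itertools import combinations

# Hypothetical stratum: 6 vehicles, 2 of them VOR (y = 1)
y = [1, 1, 0, 0, 0, 0]
Nh, nh = len(y), 3
Ph = sum(y) / Nh
Sh2 = sum((yi - Ph) ** 2 for yi in y) / (Nh - 1)    # population S_h^2 (A.39)

# Average s_h^2 = n_h/(n_h-1) p_h q_h over all C(6,3) = 20 possible samples
samples = list(combinations(y, nh))
mean_sh2 = sum(nh / (nh - 1) * (sum(s) / nh) * (1 - sum(s) / nh)
               for s in samples) / len(samples)
print(Sh2, mean_sh2)  # the two values agree, confirming E[s_h^2] = S_h^2
```

Here both quantities equal $4/15$, as (A.50) predicts; enumerating index combinations rather than sampling randomly makes the check exact.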

A.2 Derivation of Sample Sizes

Stratified random sampling with optimal allocation allows us either to minimize the number of hours spent sampling while achieving the desired confidence and precision levels, or to maximize the sampling precision for an allocated number of hours. In this section we show how to derive the sample sizes $n_h$ in the respective strata to 1) minimize the precision level (error) for a specified cost of taking the sample; 2) minimize the cost of sampling for a desired precision level; or 3) sample by proportion.

We use a simple linear cost function,
$$ \mathrm{cost} = C = \sum_{h=1}^{L} c_h n_h, \quad (A.51) $$
where $c_h$ is the cost per unit sampled, which can vary from stratum to stratum. The fixed costs are omitted in equation (A.51) for simplicity; including them does not affect the subsequent calculations.

Minimizing the precision level $d$ is the same as minimizing the variance, since $d = t\sigma \approx t\sqrt{V(p_{st})}$, with $V(p_{st})$ defined by (A.4). Choosing the $n_h$ to minimize $V(p_{st})$ for fixed $C$, or $C$ for fixed $V(p_{st})$, are both equivalent to minimizing the product $V(p_{st}) \cdot C$. To examine this product, first rewrite $V(p_{st})$ (recall that $W_h = N_h/N$ and $S_h^2 = \frac{N_h}{N_h - 1} P_h Q_h$):
$$ V(p_{st}) = \frac{1}{N^2} \sum_{h=1}^{L} \frac{N_h^2 (N_h - n_h)}{N_h - 1} \cdot \frac{P_h Q_h}{n_h} \quad (A.52) $$
$$ = \frac{1}{N^2} \sum_{h=1}^{L} \frac{N_h^2 - N_h n_h}{n_h} \cdot \frac{N_h P_h Q_h}{N_h - 1} \quad (A.53) $$
$$ = \frac{1}{N^2} \sum_{h=1}^{L} \frac{N_h^2 - N_h n_h}{n_h} S_h^2 \quad (A.54) $$
$$ = \frac{1}{N^2} \sum_{h=1}^{L} \left( \frac{N_h^2 S_h^2}{n_h} - \frac{N_h S_h^2 n_h}{n_h} \right) \quad (A.55) $$
$$ = \sum_{h=1}^{L} \left( \frac{N_h^2 S_h^2}{N^2 n_h} - \frac{N_h S_h^2}{N^2} \right) \quad (A.56) $$
$$ = \sum_{h=1}^{L} \left( \frac{N_h^2 S_h^2}{N^2 n_h} - \frac{N_h^2 S_h^2}{N_h N^2} \right) \quad (A.57) $$
$$ = \sum_{h=1}^{L} \left( \frac{W_h^2 S_h^2}{n_h} - \frac{W_h^2 S_h^2}{N_h} \right) \quad (A.58) $$
$$ = \sum_{h=1}^{L} \frac{W_h^2 S_h^2}{n_h} - \sum_{h=1}^{L} \frac{W_h^2 S_h^2}{N_h}. \quad (A.59) $$
Minimizing $V(p_{st}) \cdot C$ is thus equivalent to minimizing
$$ \left( \sum_{h=1}^{L} \frac{W_h^2 S_h^2}{n_h} \right) \left( \sum_{h=1}^{L} c_h n_h \right), \quad (A.60) $$
where the second term of (A.59) has been dropped since it is a constant. (A.60) can be minimized by applying the Cauchy-Schwarz inequality [27], which states that
$$ \left( \sum a_h^2 \right) \left( \sum b_h^2 \right) \geq \left( \sum a_h b_h \right)^2, \quad (A.61) $$
with equality occurring if and only if $b_h/a_h$ is constant for all $h$. Applying inequality (A.61) with $a_h = W_h S_h / \sqrt{n_h}$ and $b_h = \sqrt{c_h n_h}$ to minimize (A.60), we get
$$ \left( \sum_{h=1}^{L} \frac{W_h^2 S_h^2}{n_h} \right) \left( \sum_{h=1}^{L} c_h n_h \right) \geq \left( \sum_{h=1}^{L} W_h S_h \sqrt{c_h} \right)^2, \quad (A.62) $$

implying that the minimal value of (A.60) occurs when
$$ \frac{b_h}{a_h} = \frac{\sqrt{c_h n_h}}{W_h S_h / \sqrt{n_h}} = \frac{n_h \sqrt{c_h}}{W_h S_h} = K \quad \text{for all } h \in \{1, \ldots, L\}, \quad (A.63) $$
for some constant $K$. Solving for $n_h$ we get
$$ n_h = K \cdot W_h S_h / \sqrt{c_h}, \quad (A.64) $$

and since $n = \sum_{l=1}^{L} n_l$ we have
$$ n = \sum_{l=1}^{L} \left( K \cdot W_l S_l / \sqrt{c_l} \right). \quad (A.65) $$

We can then compute the ratio $n_h/n$ for all vehicle strata $h$,
$$ \frac{n_h}{n} = \frac{K \cdot W_h S_h / \sqrt{c_h}}{\sum_{l=1}^{L} \left( K \cdot W_l S_l / \sqrt{c_l} \right)}, \quad (A.66) $$
and in terms of the total sample size $n$, the sample size $n_h$ in a stratum is
$$ n_h = n \cdot \frac{W_h S_h / \sqrt{c_h}}{\sum_{l=1}^{L} \left( W_l S_l / \sqrt{c_l} \right)}. \quad (A.67) $$

To compute $n_h$ in equation (A.67) we need to know the value of $n$. The solution depends on whether the sample is chosen to meet a specified cost or a specified variance. When the cost $C$ is fixed, we solve for $n$ by substituting the optimum values of $n_h$ into function (A.51):
$$ C = \sum_{h=1}^{L} c_h n_h \quad (A.68) $$
$$ C = \sum_{h=1}^{L} c_h \left( n \cdot \frac{W_h S_h / \sqrt{c_h}}{\sum_{l=1}^{L} \left( W_l S_l / \sqrt{c_l} \right)} \right) \quad (A.69) $$
$$ C = \frac{n}{\sum_{l=1}^{L} \left( W_l S_l / \sqrt{c_l} \right)} \sum_{h=1}^{L} c_h \left( W_h S_h / \sqrt{c_h} \right) \quad (A.70) $$
$$ n = \frac{C \sum_{l=1}^{L} \left( W_l S_l / \sqrt{c_l} \right)}{\sum_{h=1}^{L} \left( c_h W_h S_h / \sqrt{c_h} \right)} \quad (A.71) $$
$$ n = \frac{C \sum_{l=1}^{L} \left( W_l S_l / \sqrt{c_l} \right)}{\sum_{h=1}^{L} \left( W_h S_h \sqrt{c_h} \right)}. \quad (A.72) $$
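Equations (A.67) and (A.72) translate directly into code. The sketch below uses hypothetical stratum weights, standard deviations, and per-vehicle inspection costs; none of the numbers are taken from the report.

```python
import math

# Hypothetical inputs: stratum weights W_h, standard deviations S_h,
# inspection cost per vehicle c_h (hours), and total budget C (hours)
W = [0.5, 0.3, 0.2]
S = [0.4, 0.3, 0.2]
c = [1.0, 2.0, 4.0]
C = 30.0

# Total sample size for a fixed cost, equation (A.72)
num = sum(w * s / math.sqrt(ch) for w, s, ch in zip(W, S, c))
den = sum(w * s * math.sqrt(ch) for w, s, ch in zip(W, S, c))
n = C * num / den

# Optimal allocation across strata, equation (A.67)
n_h = [n * (w * s / math.sqrt(ch)) / num for w, s, ch in zip(W, S, c)]
print(round(n, 2), [round(x, 2) for x in n_h])
```

The allocation spends exactly the budget $C$; in practice the $n_h$ would be rounded up to integers, which is where the integer linear program of the main text takes over.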

When the precision level $V(p_{st})$ is fixed, we substitute the optimum $n_h$ into the $V(p_{st})$ formula, (A.59), to find
$$ V(p_{st}) = \sum_{h=1}^{L} \frac{W_h^2 S_h^2}{n \cdot \frac{W_h S_h / \sqrt{c_h}}{\sum_{l=1}^{L} (W_l S_l / \sqrt{c_l})}} - \sum_{h=1}^{L} \frac{W_h^2 S_h^2}{N_h} \quad (A.73) $$
$$ V(p_{st}) + \sum_{h=1}^{L} \frac{W_h^2 S_h^2}{N_h} = \frac{\sum_{l=1}^{L} (W_l S_l / \sqrt{c_l})}{n} \sum_{h=1}^{L} \frac{W_h^2 S_h^2}{W_h S_h / \sqrt{c_h}} \quad (A.74,\ A.75) $$
$$ V(p_{st}) + \frac{1}{N} \sum_{h=1}^{L} W_h S_h^2 = \frac{1}{n} \sum_{l=1}^{L} \left( W_l S_l / \sqrt{c_l} \right) \sum_{h=1}^{L} W_h S_h \sqrt{c_h} \quad (A.76,\ A.77) $$
$$ n = \frac{\sum_{h=1}^{L} \left( W_h S_h \sqrt{c_h} \right) \sum_{l=1}^{L} \left( W_l S_l / \sqrt{c_l} \right)}{V(p_{st}) + (1/N) \sum_{h=1}^{L} W_h S_h^2}. \quad (A.78) $$
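Equation (A.78) can be sketched the same way. The inputs below are again hypothetical (fleet size $N$, weights, variances, costs, and a target variance for $p_{st}$):

```python
import math

# Hypothetical inputs
N = 200
W = [0.5, 0.3, 0.2]
S = [0.4, 0.3, 0.2]
c = [1.0, 2.0, 4.0]
V_target = 0.002          # desired variance of p_st

# Minimum-cost total sample size for a fixed variance, equation (A.78)
a = sum(w * s * math.sqrt(ch) for w, s, ch in zip(W, S, c))
b = sum(w * s / math.sqrt(ch) for w, s, ch in zip(W, S, c))
n = a * b / (V_target + sum(w * s * s for w, s in zip(W, S)) / N)
print(round(n, 1))
```

With the optimal allocation (A.67) applied to this $n$, the achieved variance $ab/n - \frac{1}{N}\sum W_h S_h^2$ recovers the target exactly, which is a convenient self-check.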

When the precision level $V(p_{st})$ is fixed and we wish to sample the strata proportionately, we substitute $n_h = n \frac{N_h}{N}$ into the $V(p_{st})$ formula, (A.59), to find

$$ V(p_{st}) = \sum_{h=1}^{L} \frac{W_h^2 S_h^2}{n \frac{N_h}{N}} - \sum_{h=1}^{L} \frac{W_h^2 S_h^2}{N_h} \quad (A.79) $$
$$ V(p_{st}) + \sum_{h=1}^{L} \frac{W_h^2 S_h^2}{N_h} = \frac{1}{n} \sum_{h=1}^{L} W_h S_h^2 \quad (A.80) $$
$$ n = \frac{\sum_{h=1}^{L} W_h S_h^2}{V(p_{st}) + \sum_{h=1}^{L} \frac{W_h^2 S_h^2}{N_h}} \quad (A.81) $$
$$ = \frac{\sum_{h=1}^{L} W_h S_h^2}{V(p_{st}) + \frac{1}{N} \sum_{h=1}^{L} W_h S_h^2}, \quad (A.82) $$

which can be re-written using $p_h$ as an unbiased estimate of $P_h$, as in Section 3.2.1, and simplified as
$$ n = \frac{\bar{n}}{1 + \frac{\bar{n}}{N}}, \quad (A.83) $$
$$ \bar{n} = \frac{\sum_{h=1}^{L} \frac{N_h}{N} p_h q_h}{V}, \quad (A.84) $$
with the factor $\frac{N_h}{N_h - 1}$ assumed to be approximately 1 (see footnote 4).

4. In sampling theory, the convention is to use the divisor N − 1 instead of N when approaching the theory by means of analysis of variance. Most results take a slightly simpler form, and are equivalent.
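The proportional-allocation rule (A.83)-(A.84) can be sketched as follows. The per-stratum counts and prior VOR estimates below are hypothetical, standing in for the figures a Maintenance Workshop Commander would supply:

```python
# Proportional allocation, equations (A.83)-(A.84).
# Hypothetical stratum sizes N_h and prior VOR estimates p_h;
# V is the target variance of the stratified estimate.
N_h = [100, 60, 40]
p_h = [0.20, 0.10, 0.05]
V = 0.002

N = sum(N_h)
n_bar = sum(Nh * ph * (1 - ph) for Nh, ph in zip(N_h, p_h)) / (N * V)  # (A.84)
n = n_bar / (1 + n_bar / N)          # (A.83): finite-population correction
n_h = [n * Nh / N for Nh in N_h]     # proportional shares (round up in use)
print(round(n, 1), [round(x, 1) for x in n_h])
```

The finite-population correction in (A.83) noticeably reduces the required sample when $\bar{n}$ is an appreciable fraction of the fleet size $N$.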

List of symbols/abbreviations/acronyms/initialisms

  ASU          Area Support Unit
  BFC          Base des Forces canadiennes
  BISON        Bison Armoured Vehicle
  CCEM         Centre canadien d'entraînement aux manœuvres
  CF           Canadian Forces
  CFB          Canadian Forces Base
  CFR          Canadian Forces Registration
  CMTC         Canadian Manoeuver Training Centre
  COYOTE       Recce Coyote
  DLEPS        Directorate Land Equipment Program Systems
  DMIS         Director Materiel Information Systems
  DMGOR        Directorate Material Group Operational Research
  DROGM        Directeur - Recherche Opérationnelle (Groupe des matériels)
  EO           Electronic Optronic
  FMS          Fleet Management System
  FOO          Forward Observer Officer
  HLVW         Heavy Logistics Vehicle Wheeled
  hrs          hours
  ILP          Integer Linear Program
  kms          kilometers
  LAV III APC  Light Armoured Vehicle III Armoured Personnel Carrier
  LCIS         Land Communications and Information Systems
  LEOPARD      Leopard Tank
  LSVW         Light Support Vehicle Wheeled
  LUVW         Light Utility Vehicle Wheeled
  MKBF         Mean Kilometers Between Failures
  MLVW         Medium Logistics Vehicle Wheeled
  MSVS         Medium Support Vehicle System
  MTTR         Median Time to Repair
  MWC          Maintenance Workshop Commander
  MWC          Commandant de l'atelier de maintenance
  NATO         North Atlantic Treaty Organization
  NSN          NATO Stock Number
  TA           Training Authority
  TF           Training Force
  TLAV         Tracked Light Armoured Vehicle
  USS          Unité de soutien de secteur
  VB           Visual Basic
  vehs         vehicles
  VOR          Vehicle Off-Road
  VOR          Véhicule hors route
  WO           Work Orders

Distribution List

Internal

1 ASG (Hard Copy + PDF)
CMTC (Hard Copy + PDF)
LFDTS (Hard Copy + PDF)
DLSS G4 Maint (Hard Copy + PDF)
DLSP (Hard Copy + PDF)
LFWA/ CFB/ASU Wainwright/ Maint/ OC (2 Hard Copy + PDF)
LFWA/ CFB/ASU Wainwright/ Maint/ GS Veh PL/ Anc PL IC (PDF)
LFWA/ CFB/ASU Wainwright/ Maint/ Con O (PDF)
DGLEPM/DLEPS (PDF)
DGLEPM/ PMO CANSOFCOM/ EQPT MANAGER (Hard Copy)
LFDTS/DGLCD/DAD (PDF)
Director Army Training (PDF)
DLR 3 (PDF)
DLR 6 (PDF)
Scientific Advisor Land (PDF)
DRDC CORA/ DG CORA/ DDG CORA/ SH J&C/ Chief Scientist (1 copy on circulation)
DRDC Valcartier ORT (PDF)
DRDC CORA/Land Capability ORT (PDF)
DRDC CORA/LFORT (PDF)
DRDC CORA/DMGOR (PDF)
DRDC CORA Library (Hard Copy + PDF)
DRDKIM (3 PDF)
Author (2 Hard Copy + PDF)


DOCUMENT CONTROL DATA
(Security classification of title, body of abstract and indexing annotation must be entered when document is classified)

1. ORIGINATOR (The name and address of the organization preparing the document. Organizations for whom the document was prepared, e.g. Centre sponsoring a contractor's report, or tasking agency, are entered in section 8.)
Defence R&D Canada – CORA
Dept. of National Defence, MGen G.R. Pearkes Bldg., 101 Colonel By Drive, Ottawa, Ontario, Canada K1A 0K2

2. SECURITY CLASSIFICATION (Overall security classification of the document including special warning terms if applicable.)
UNCLASSIFIED

3. TITLE (The complete document title as indicated on the title page. Its classification should be indicated by the appropriate abbreviation (S, C or U) in parentheses after the title.) Optimizing Vehicle Off-Road Assessment & Management at Canadian Manoeuver Training Centre

4. AUTHORS (Last name, followed by initials – ranks, titles, etc. not to be used.) Kaluzny, B.L.

5. DATE OF PUBLICATION (Month and year of publication of document.)
December 2007

6a. NO. OF PAGES (Total containing information. Include Annexes, Appendices, etc.)
68

6b. NO. OF REFS (Total cited in document.)
27

7. DESCRIPTIVE NOTES (The category of the document, e.g. technical report, technical note or memorandum. If appropriate, enter the type of report, e.g. interim, progress, summary, annual or final. Give the inclusive dates when a specific reporting period is covered.) Technical Report

8. SPONSORING ACTIVITY (The name of the department project office or laboratory sponsoring the research and development – include address.) Defence R&D Canada – CORA Dept. of National Defence, MGen G.R. Pearkes Bldg., 101 Colonel By Drive, Ottawa, Ontario, Canada K1A 0K2

9a. PROJECT NO. (The applicable research and development project number under which the document was written. Please specify whether project or grant.)

9b. GRANT OR CONTRACT NO. (If appropriate, the applicable number under which the document was written.)
N/A

10a. ORIGINATOR'S DOCUMENT NUMBER (The official document number by which the document is identified by the originating activity. This number must be unique to this document.)
DRDC CORA TR 2007–023

10b. OTHER DOCUMENT NO(s). (Any other numbers which may be assigned this document either by the originator or by the sponsor.)

11. DOCUMENT AVAILABILITY (Any limitations on further dissemination of the document, other than those imposed by security classification.)
( X ) Unlimited distribution
(   ) Defence departments and defence contractors; further distribution only as approved
(   ) Defence departments and Canadian defence contractors; further distribution only as approved
(   ) Government departments and agencies; further distribution only as approved
(   ) Defence departments; further distribution only as approved
(   ) Other (please specify):

12. DOCUMENT ANNOUNCEMENT (Any limitation to the bibliographic announcement of this document. This will normally correspond to the Document Availability (11). However, where further distribution (beyond the audience specified in (11)) is possible, a wider announcement audience may be selected.) 13. ABSTRACT (A brief and factual summary of the document. It may also appear elsewhere in the body of the document itself. It is highly desirable that the abstract of classified documents be unclassified. Each paragraph of the abstract shall begin with an indication of the security classification of the information in the paragraph (unless the document itself is unclassified) represented as (S), (C), (R), or (U). It is not necessary to include here abstracts in both official languages unless the text is bilingual.)

Canadian Forces Base/Area Support Unit Wainwright, Alberta is the home of the Canadian Manoeuver Training Centre, the national centre of excellence for collective training. All CF units that deploy on operations to Afghanistan come to Canadian Forces Base Wainwright to train as formed groups prior to deploying. The base supplies the vehicles required for their training experience. The Canadian Forces Base Wainwright Maintenance Workshop Commander is responsible for ensuring operational fitness of the vehicle fleet and advises the Canadian Manoeuver Training Centre Training Authority on vehicle condition. Mid-exercise vehicle sampling (inspection) is employed to assess the vehicle off-road rate. The challenge is to optimize the process to minimize the resources (maintenance crew) required to give timely and sound feedback to the Training Authority, and to develop the capability to predict end-of-exercise vehicle off-road rates based on mid-exercise sampling.

This study develops tools that enable the Maintenance Workshop Commander to quantify and justify the vehicle fleet readiness. A stratified random sampling model is proposed. The optimal sample size is calculated for the desired confidence and precision levels. Vehicle off-road inference and projection models are developed using probability theory and simulation. Mathematical optimization is used to optimize resource allocation and vehicle readiness for subsequent exercises.

14. KEYWORDS, DESCRIPTORS or IDENTIFIERS (Technically meaningful terms or short phrases that characterize a document and could be helpful in cataloguing the document. They should be selected so that no security classification is required. Identifiers, such as equipment model designation, trade name, military project code name, geographic location may also be included. If possible keywords should be selected from a published thesaurus. e.g. Thesaurus of Engineering and Scientific Terms (TEST) and that thesaurus identified. If it is not possible to select indexing terms which are Unclassified, the classification of each should be indicated as with the title.)

CMTC; Integer Linear Program; Maintenance; Monte Carlo; Optimization; Reliability; Sample Size; Simulation; Stratified Sampling; Vehicle Off-Road; Wainwright

DRDC CORA

www.drdc-rddc.gc.ca