
PeerJ Preprints | https://doi.org/10.7287/peerj.preprints.2670v1 | CC BY 4.0 Open Access | rec: 23 Dec 2016, publ: 23 Dec 2016

Signature-based detection of behavioural deviations in flight simulators - Experiments on FlightGear and JSBSim

Vincent Boisselle (1), Giuseppe Destefanis (2, corresponding author), Agostino De Marco (3), Bram Adams (1)

(1) Computer Science, Polytechnique Montréal, Montreal, Quebec, Canada ([email protected], [email protected])
(2) Computer Science, CRIM Montreal - Brunel University London, London, United Kingdom ([email protected])
(3) Dipartimento di Ingegneria Industriale, University of Naples Federico II, Napoli, Italy ([email protected])

Abstract

Flight simulators are systems composed of numerous off-the-shelf components that allow pilots and maintenance crews to prepare for common and emergency flight procedures for a given aircraft model. A simulator must meet strict safety specifications to guarantee correct behaviour, and requires an extensive series of prolonged manual tests to identify bugs or safety issues. In order to reduce the time required to test a new simulator version, this paper presents rule-based models that automatically identify unexpected behaviour (deviations). The models represent signature trends in the behaviour of a successful simulator version, which are compared to the behaviour of a new simulator version. An empirical analysis of nine types of injected faults in the popular FlightGear and JSBSim open source simulators shows that our approach does not miss any deviating behaviour for faults that change the flight environment, and that we are able to find all injected deviations for 4 out of 7 functional faults and 75% of the deviations for 2 other faults.

Keywords: behavioural deviation, performance regression testing, flight simulator
1. Introduction

Aircraft simulators [3] reproduce common scenarios such as "take-off", "automatic cruising" and "emergency descent" in a virtual environment [44]. The trainees need to act according to the appropriate flight procedures and react in a timely manner to any event generated by the simulator [25]. Since a simulator is the most realistic ground-based training experience that these trainees can obtain, aircraft companies invest significant amounts of money in simulator training hours.

To ensure that a simulator is realistic, it must undergo onerous qualification procedures before being deployed. These qualification procedures [1, 24, 39] need to demonstrate that the simulator behaviour represents an aircraft with high fidelity, including an accurate representation of performance and sound. The Federal Aviation Administration (FAA) describes four levels of training device qualification, from A to D, with one hour of flight training in a level D training device being recognized as one hour of flight training in a real aircraft. Since re-qualification is mostly manual (it requires booking actual pilots), the total process can last from a week to several months.

The high cost of training device qualification derives partly from the fact that a simulator training device is composed of sophisticated commercial off-the-shelf (COTS) components from third-party providers (for instance, aircraft manufacturers) that need to be treated as black boxes during the integration process. Each COTS provider can use its own format to describe the interface of its component, yet the producer of the simulator needs to integrate all components in a coherent way without access to the components' source code [35, 53]. An upgrade of a single component (engine, hydraulic component, electrical system) can trigger a complete flight simulator re-qualification process.

Re-qualification is an important problem to tackle, since the process is very expensive and is one of the main sources of expenses for flight simulator companies. Upgrades, new versions and new functionality are continuously delivered by COTS providers to flight simulator companies. The changes these components introduce range from different output values (different thresholds, different units of measure) to completely new interfaces. The main challenge is to avoid a complete re-qualification of the whole flight simulator by focusing attention on a single component or a specific set of components. Since the avionics domain lacks robust tool support for automating tasks such as the analysis of COTS interactions, change propagation [27], test selection and test prioritization, even a small modification to a COTS still requires (manual) re-qualification of the whole system [18, 31, 34, 50].

What further hampers the automation of these tests is the large amount of data gathered during the qualification tests. Despite the safety-critical nature of these tests, the test data needs to be analyzed manually, which is not only tedious but also risky: any missed anomaly is potentially life-threatening, since incorrect behaviour of the simulator could incorrectly train pilots and crew members and cause tragic accidents [46].

Since the analysis of large amounts of data to identify trends that deviate from previous versions of a software system is common in other domains as well, such as the analysis of software performance test results, this paper adapts a state-of-the-art technique for software performance analysis [22] to automatically detect whether a new simulator version exhibits behaviour that is out of the ordinary. Instead of building a signature of normal system performance based on performance-related metrics and then comparing the signature to the metrics of a new test run to find performance deviations, our signatures are rules that describe the normal functional behaviour of a simulated aircraft observed in successful versions of a flight simulator, with violations of these rules indicating deviating behaviour of the aircraft. Instead of collecting software performance metrics such as CPU utilization, we collect specific flight metrics that are related to the aircraft's functional behaviour.

For example, if the correct behaviour of a particular flight scenario on a successful version of a simulator exhibits high speed when the engines run at maximal capacity, there would be a signature rule "max. engine capacity => high speed". A deviation would then be a case where, for a new version of the simulator, the aircraft suddenly has a low speed while the engines still run at maximal capacity. Of course, our approach needs to be robust against noise, where a rule is violated only during a short period.
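To make this concrete, the following minimal sketch checks a single hand-written signature rule against a discretized log of flight metrics, and reports a deviation only when the rule is violated for several consecutive samples. It is an illustration only, not the paper's implementation: the rule encoding, the metric names, the discretization and the three-sample noise window are all assumptions made for the example (in our approach, rules are derived from runs of a successful simulator version rather than written by hand).

```python
# Sketch: checking the signature rule "max. engine capacity => high speed"
# against a metric log, with a consecutive-violation window to absorb noise.
# All names and thresholds here are illustrative assumptions.

# One discretized flight-metric sample per time step.
samples = [
    {"engine_capacity": "max", "speed": "high"},
    {"engine_capacity": "max", "speed": "high"},
    {"engine_capacity": "max", "speed": "low"},   # start of a deviation
    {"engine_capacity": "max", "speed": "low"},
    {"engine_capacity": "max", "speed": "low"},
]

rule = {"antecedent": ("engine_capacity", "max"),
        "consequent": ("speed", "high")}

def find_deviations(samples, rule, min_violation_run=3):
    """Return (start, end) sample ranges where the rule's antecedent holds
    but its consequent does not, for at least min_violation_run samples."""
    key_a, val_a = rule["antecedent"]
    key_c, val_c = rule["consequent"]
    deviations, run_start, run_len = [], 0, 0
    for t, sample in enumerate(samples):
        violated = sample.get(key_a) == val_a and sample.get(key_c) != val_c
        if violated:
            if run_len == 0:
                run_start = t
            run_len += 1
        else:
            if run_len >= min_violation_run:
                deviations.append((run_start, run_start + run_len - 1))
            run_len = 0
    if run_len >= min_violation_run:  # a violation run may end the log
        deviations.append((run_start, run_start + run_len - 1))
    return deviations

print(find_deviations(samples, rule))  # [(2, 4)]
```

In this sketch a single noisy sample is not flagged; only a sustained violation of a signature rule counts as deviating behaviour.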
Since the 80 analysis of large data to identify deviating trends compared to previous ver- 81 sions of a software system are common to other domains as well, such as 82 the analysis of software performance test results, this paper adapts a state- 83 of-the-art technique for software performance analysis [22] to automatically 84 detect whether a new simulator version exhibits behaviour that is out of 85 the ordinary. Instead of building a signature of normal system performance 86 based on performance-related metrics, then comparing the signature to the 87 metrics of a new test run to find performance deviations, our signatures are 88 rules that describe the normal functional behaviour of a simulated aircraft 89 observed in successful versions of a flight simulator, with deviations of these 90 rules indicating deviating behaviour of an aircraft. Instead of collecting soft- 91 ware performance metrics such as CPU utilization, we need to collect specific 92 flight metrics that are related to the aircraft’s functional behaviour. 93 For example, if the correct behaviour of a particular flight scenario on a 94 successful version of a simulator exhibits high speed when the engines run at 95 maximal capacity, there would be a signature rule “max. engine capacity 96 => high speed”. A deviation would then be a case where, for a new 97 version of the simulator, the aircraft suddenly would have a low speed while 98 the engines still run at the maximal capacity. Of course, our approach needs 99 to be robust against noise, where only during a short period a rule would be 100 violated. 101 After calibration of the approach, we perform an empirical study on the 102 FlightGear and JSBSim [6] open source simulators at different granularity 3 PeerJ Preprints | https://doi.org/10.7287/peerj.preprints.2670v1 | CC BY 4.0 Open Access | rec: 23 Dec 2016, publ: 23 Dec 2016 103 levels, in which we introduce 2 faults related to the flight environment and 7 104 faults related to flight behaviour to address the following research questions: 105 RQ1: What is the performance of the approach for faults that 106 change the flight environment? It is possible to have no false alarms for 107 one environmental fault, whereas flagging all deviations for a second envi- 108 ronmental fault is not possible without having 50% false alarms. 109 RQ2: What is the performance of the approach for functional 110 faults? 111 We are able to find all injected deviations in 4 out of 7 faults, and 75% of 112 the deviations in 2 other faults, yet especially for JSBSim the precision using 113 the initial thresholds is low (i.e., less than 44%). 114 2. Background and Related work 115 Flight simulators are tested like real aircraft. The pilot performing those 116 simulator tests takes the pilot seat and goes through a list of procedures, 117 specified in the Acceptance Test Manual, by performing different actions with 118 the cockpit control inputs.
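As an illustration of how such a procedure step (here, pushing the throttle to maximal capacity) can be replayed without a pilot in the loop and its flight metrics logged, the sketch below uses the JSBSim Python bindings. The aircraft model, initial conditions and property names are assumptions chosen for illustration; this is not part of the qualification procedures described above, nor of our experimental setup.

```python
# Sketch: replaying one procedure step (full throttle) and logging flight
# metrics with the JSBSim Python bindings (pip install jsbsim). The model,
# initial conditions and property names are illustrative assumptions.
import jsbsim

fdm = jsbsim.FGFDMExec(None)        # None: use the bundled JSBSim data root
fdm.load_model("c172x")             # sample aircraft shipped with JSBSim

# Rough initial conditions: airborne at 5000 ft and 120 kts.
fdm["ic/h-sl-ft"] = 5000
fdm["ic/vc-kts"] = 120
fdm.run_ic()
fdm["propulsion/set-running"] = -1  # start all engines

log = []
for _ in range(2000):
    fdm["fcs/throttle-cmd-norm[0]"] = 1.0   # "cockpit input": max throttle
    fdm.run()
    log.append({
        "t": fdm.get_sim_time(),
        "speed_kts": fdm["velocities/vc-kts"],
        "thrust_lbs": fdm["propulsion/engine[0]/thrust-lbs"],
    })

print(log[-1])
```

A per-timestep trace such as `log` is the kind of flight-metric data from which signature rules like the one sketched in the introduction could be derived and checked.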