Bachelor Thesis Electrical Engineering June 2020

Raspberry Pi Based Vision System for Foreign Object Debris (FOD) Detection

Sarfaraz Ahmad Mahammad Sushma Vendrapu

Department of Mathematics and Natural Sciences
Blekinge Institute of Technology
SE–371 79 Karlskrona, Sweden

This thesis is submitted to the Department of Mathematics and Natural Sciences at Blekinge Institute of Technology in partial fulfillment of the requirements for the degree of Bachelor in Electrical Engineering with Emphasis on Telecommunication.

Contact Information:
Authors:
Sarfaraz Ahmad Mahammad, E-mail: [email protected]
Sushma Vendrapu, E-mail: [email protected]

Supervisor: Prof. Wlodek J. Kulesza

Industrial Supervisors:
Dawid Gradolewski
Damian M. Dziak
Address: Bioseco Sp. z o. o., Budowlanych 68 Street, 80-298 Gdańsk, Poland

University Examiner: Irina Gertsovich

Department of Mathematics and Natural Sciences
Blekinge Institute of Technology
SE–371 79 Karlskrona, Sweden
Internet: www.bth.se
Phone: +46 455 38 50 00
Fax: +46 455 38 50 57

Abstract

Background: The main purpose of this research is to design and develop a cost-effective system for detection of Foreign Object Debris (FOD), dedicated to airports. FOD detection has been a significant problem at airports as it can cause damage to aircraft. Developing such a device to detect FOD may require complicated hardware and software structures. The proposed solution is based on a computer vision system, which comprises flexible off-the-shelf components such as a Raspberry Pi and a camera module, allowing a simple and efficient way to detect FOD.

Methods: The solution in this research is achieved through User-centered design, which helps design the system solution suitably and efficiently. The system solution specifications, objectives and limitations are derived from this User-centered design. The possible technologies are concluded from the required functionalities and constraints to obtain a real-time, efficient FOD detection system.

Results: The results are obtained using background subtraction for FOD detection and the implementation of an SSD (single-shot multi-box detector) model for FOD classification. The performance of the system is evaluated by testing its ability to detect FOD of different sizes at different distances. A web interface is also implemented to notify the user in real-time when there is an occurrence of FOD.

Conclusions: We concluded that background subtraction and the SSD model are the most suitable algorithms for a solution design based on the Raspberry Pi to detect FOD in a real-time system. The system performs in real-time, giving an efficiency of 84% for detecting medium-sized FOD such as persons at a distance of 75 meters and 72% for detecting large-sized FOD such as cars at a distance of 125 meters. The average frame rate at which the system records and processes frames of the monitored area is 0.95 fps.

Keywords: Airports, Computer vision, Performance evaluation, Real-time systems, User Centered Design, Web design.

Acknowledgements

We would like to express our gratitude to the Bioseco Company for assigning both of us to this project. We would like to thank Dawid Gradolewski and Damian Dziak, who guided us in the working process and helped us with both software and hardware problems. Thank you for giving us the opportunity to participate in such a unique project.

We would also like to thank our supervisor Prof. Wlodek J. Kulesza, for providing us guidance, suggestions, and comments. He gave us inspiration and motivation during the realization of this project.

We sincerely thank our examiner Irina Gertsovich for assigning the project. We would also like to thank our parents and friends, especially Mr. Mohammad Ali and Mr. Ajay Kumar, for helping us in this project.

This research was conducted within grant "Carrying out research and development works necessary to develop a new autonomous FAUNA MONITORING SYSTEM (AFMS) reducing the number of collisions between aircraft and birds and mammals" (No. POIR.01.01.01-00-0020/19) from The National Centre for Research and Development of Poland.

Contents

Abstract

Acknowledgements

List of Figures

List of Tables

Acronyms

1 Introduction

2 Survey of Related Works
   2.1 Problem Overview
   2.2 Hardware Solutions
   2.3 Software Solutions
   2.4 Summary

3 Problem Statement, Objectives and Main Contributions
   3.1 Problem Statement
   3.2 Thesis Objectives
   3.3 Main Contributions

4 System Design and Modeling
   4.1 System Design
   4.2 System Modeling
      4.2.1 Hardware Model
      4.2.2 Software Model

5 System Implementation, Prototyping and Validation
   5.1 System Implementation and Prototyping
      5.1.1 Hardware Implementation and Prototype
      5.1.2 Software Implementation and Prototype
   5.2 Validation

6 Discussion

7 Conclusions and Future Works
   7.1 Conclusions
   7.2 Future Works

References

Appendices

Appendix A
   A.1 Program Listing of FOD Detection and Classification

Appendix B
   B.1 Program Listing of Flask based Web Server
   B.2 Program Listing of Web Pages displayed by Web Server

List of Figures

4.1 Proposed design process for FOD detection system
4.2 Block diagram of FOD detection system model
4.3 Software modeling of the FOD detection system

5.1 Connection of Raspberry Pi and camera module [35]
5.2 Testing Raspberry Pi and camera module
5.3 System prototype for FOD detection (front view)
5.4 System prototype for FOD detection (back view)
5.5 System flowchart of detecting, classifying and notifying the presence of FOD
5.6 Flowchart for background subtraction on the system to detect FOD
5.7 Flowchart of FOD classification on the system
5.8 Flowchart of web interface on the system
5.9 Web interface display during no FOD occurrence
5.10 Web interface response during FOD occurrence
5.11 Web page displaying the data of occurred FOD
5.12 Detection of FOD (person) at a distance of 25 meters
5.13 Detection and classification of multiple moving objects

6.1 Graphical representation of system efficiency for detecting and classifying medium size FOD - a person
6.2 Graphical representation of system efficiency for detecting and classifying large size FOD - a car

List of Tables

2.1 Sources, types of FOD and causes of occurrence [14]
2.2 State-of-the-art technologies & research works

4.1 Technologies and algorithms related to itemized functionalities and constraints (selected technologies are marked with *)

5.1 Detection and classification of FOD (person) for different distances
5.2 System efficiency for FOD (person) detection
5.3 System efficiency to classify the detected FOD (person)
5.4 Detection and classification of FOD (car) for different distances
5.5 System efficiency for FOD (car) detection

6.1 System efficiency to detect FOD

7.1 Efficiency of the system to detect FOD for different ranges

Acronyms

CNN Convolutional Neural Network.

CSS Cascading Style Sheets.

DNN Deep Neural Network.

FAA Federal Aviation Administration.

FFC Flexible Flat Cables.

FOD Foreign Object Debris.

FoV Field of View.

FPS Frame Per Second.

GPS Global Positioning System.

HTML Hyper Text Markup Language.

mAP mean Average Precision.

mmWave Millimeter Wave.

MoG Mixture-of-Gaussians.

RPN Region Proposal Networks.

SA Surface Area.

SSD Single Shot Detector.

STN Spatial Transformer Network.

UDD User-Driven Design.

WPT Wireless Power Transmission.

Chapter 1 Introduction

The problem with Foreign Object Debris (FOD) at airports has increased rapidly in recent years. It is observed that accidents due to FOD occur mainly at airport runways, gateways and taxiways [1]. In unfortunate situations, FOD can damage the aircraft tires or engines and put them out of operation. Such a situation also gives rise to substantial delays for multiple aircraft and, in extreme cases, can cause an accident with the possibility of casualties. Based on the research done by the French Study on Automatic Detection Systems, over 60% of the collected known FOD items were made of metal, followed by 18% made of rubber [2].

That is to say, FOD poses a major problem in the aviation industry, impacting the safety of aircraft. For this reason, several research works were performed in recent years to develop a suitable solution for FOD detection. The financial loss to the aviation industry is estimated to be 4 billion dollars per year [3]. Besides the money, there are also invaluable losses. In the year 2000, Air France flight 4590 crashed after running over a small metal strip, resulting in an in-flight fire and loss of control. The metal strip had fallen from a Continental Airlines aircraft, which took off from the same runway moments earlier. Unfortunately, this crash resulted in 113 casualties [4].

The detection of birds and other animals at airport runways is challenging due to the necessity of monitoring the vast area around the runway. Damage to aircraft is mostly caused by bird ingestion into engines. Along with birds, many kinds of mammals also damage aircraft due to improper security fencing of airports. In one incident, a deer caused an aircraft crash on Sitka's runway [5], and in 2015 a kangaroo caused an aircraft crash [6].

In many airports, wildlife collisions with aircraft are on the rise. According to the Federal Aviation Administration (FAA), the overall number of strikes rose from about 1,800 in 1990 to over 16,000 in 2018 [7]. Further, with the rising frequency of wildlife impacts, more focus has been given to wildlife vulnerability analysis and the maintenance of airfield biodiversity [8]. Therefore, the maintenance of runways plays an essential role in the safety of aircraft.


This project aims to design and develop a vision-based FOD detection system that can monitor airport runways as well as taxiways and apron areas. It is important to choose a suitable algorithm and a flexible, cost-efficient device with good performance. To develop an optimal solution, we used the User-driven design (UDD) approach. The system is implemented based on the requirements of stakeholders and also future users. For FOD detection, we require algorithms and technologies with high reliability, which are finalised after analysing the functionalities table. This paper primarily focuses on the implementation of a cost-efficient FOD detection and classification system.

This thesis is structured as follows: Chapter 2 gives a general overview of the most recent research work regarding FOD detection and its taxonomy. The third chapter presents the problem statement of the research as well as its objectives and contributions. The fourth chapter is about system design and modeling using the User-driven design methodology. In the fifth chapter, a system prototype and the real-world tests and evaluation are presented. The sixth chapter discusses the obtained results and their outcome. The seventh chapter presents the conclusions and future works of the proposed solution.

Chapter 2 Survey of Related Works

In today's world, almost everything is being controlled and monitored by the latest technologies. The need for FOD detection was observed not only at airports [9] but, among others, at railway tracks [10], ports [11] etc. FOD detection systems are used not only for detecting threat objects but also for many other applications, such as in Wireless Power Transfer (WPT) systems [12] or chest X-rays [13]. In WPT, the presence of FOD between the transmitter coil and receiver coil can cause a fire accident, for which a FOD detection system is necessary. The existence of FOD, such as buttons on clothing, can reduce the performance of chest X-rays for identifying pulmonary diseases.

This chapter gives a general overview of technologies and algorithms that can be used for FOD detection at airports. It is divided into two parts, the overview and the research works. The overview consists of a statistical analysis of FOD, which relates the origin of different kinds of FOD to the resulting damage under different circumstances. The research works cover the study of the compositions of different technologies, their architecture hierarchy and their final results to overcome the problem of FOD.

2.1 Problem Overview

FOD can result from many circumstances; it can consist of any substance (living or non-living) and can be of any dimensions (measured as the area enclosing the FOD in m²). FOD may be divided in many ways regarding its type and origin [14]. Nature, technology and humans are the most common sources of FOD occurrence. Table 2.1 shortly summarizes the sources of FOD, their types and the causes.

The main complications which may occur due to FOD, concerning aircraft or airport management, are damage to the engine due to ingestion of FOD, ruined tyres, as well as damage to the structure of the aircraft and reduction of its performance. Moreover, these incidents may disrupt the normal functioning of the airport, resulting in revenue loss. These complications are explained further below.


Table 2.1: Sources, types of FOD and causes of occurrence [14]

Source: Nature
Type: Wildlife, plant fragments, broken branches, stones, snow, ice.
Causes of occurrence: Animals - due to improper security fence; avifauna - through the air; miscellaneous objects - due to wind.

Source: Technological artefacts
Type: Fuel caps, landing gear fittings, nuts, metal sheets.
Causes of occurrence: Shattering of loose parts of aircraft during takeoff or landing; wind depositing the objects.

Source: Human artefacts
Type: Mechanics' tools, catering supplies & their debris, personnel badges, maintenance & construction equipment.
Causes of occurrence: Carelessness of authorities during regular inspection of runways; wind depositing apron items.

Engines are highly susceptible to objects of any kind (either soft or hard, and irrespective of size and shape). Once such objects strike a working engine, they can damage the rotating blades, static vanes or other parts of the engine. Rotating blades can easily be bent by such objects, which results in lower engine efficiency. In rare instances, they can damage the whole structure of the engine [15], leading to aeroplane crashes. One such incident is the Miracle on the Hudson in 2009, in which an Airbus A320-214 lost thrust on both engines due to ingestion of large birds shortly after takeoff [16]. The aircraft landed safely on the Hudson River with no fatalities.

Another example is tyre damage. In some cases, FOD can result in detachment of tyre treads, which can damage other parts of the aircraft or even cause difficulties for other aircraft that are going to take off or land on the same runway. In certain instances, a FOD object can fully penetrate the aircraft tyre and remain undetectable during inspection. Such cases can result in tyre bursts and cause huge losses [15]. An example of this was a 43-inch metal strip left on the runway, which penetrated an aircraft tyre in the crash in 2000. The metal piece caused a tyre burst, which in turn ruptured a fuel tank, resulting in a plane crash with 113 fatalities [17].

Areas such as the fuselage (the aircraft's main body section), wings, nose and windshield can also be damaged by FOD. Damage to the aircraft's structure results in an aerodynamic loss. If the aircraft nose gets damaged due to FOD, it could corrupt the aircraft's radar system, which then gives faulty readings, leading to more serious complications [15].

Another undesirable result of FOD is the disruption of the normal functioning of the airport. When an aircraft gets damaged due to FOD on the runway, it causes flight delays, cancellation of flights and revenue loss. In such cases, additional work by employees is also required [15].

There have been many research works focusing on the development of a suitable FOD detection system. The crucial part of each detection system is the algorithm and its real-time implementation. The results obtained in each research work with its chosen algorithm are also explained briefly. The Bioseco company of Poland [18] is also working on the research and development necessary to develop a new autonomous AIRPORT FAUNA MONITORING SYSTEM (AFMS), reducing the number of collisions between aircraft and birds and mammals. This project aims to conduct research and development efforts that will allow the development of a prototype of the new AIRPORT FAUNA MONITORING SYSTEM [19].

2.2 Hardware Solutions

This section contains solutions depending upon hardware components or sensors to detect FOD. These solutions can be obtained using radar and lidar technologies. Explicit modeling and installation of the system are necessary for detecting FOD. Maintenance of the sensors is also required for the effectiveness of the system.

FOD detection can be achieved using radar technology. One such research work is Millimeter-wave Ground-based Synthetic Aperture Radar Imaging for Foreign Object Debris Detection. In this research work, the ground-based synthetic aperture radar (GB-SAR) technique is used to study millimeter-wave imaging of FOD present on the ground. Experiments were performed for detecting FOD such as braces and screws using millimeter-wave frequency bands from 32 GHz to 36 GHz and from 90 GHz to 95 GHz. In addition, a matched-filter algorithm is applied to reconstruct the image with proper system parameters and to understand the factors affecting successful FOD detection. The system can successfully detect screws of size 40 mm L × 12 mm W at a distance of 2 meters. However, modifications are needed for detecting FOD at long distances [20]. There are several other research works, such as millimeter-wave radar for foreign object debris detection on airport runways [21] and millimeter-wave imaging of foreign object debris (FOD) based on a two-dimensional approach [22]. But installing such systems requires additional permission from airport management, as they can interfere with aircraft signals. Implementation and maintenance of such a system is expensive, which makes it hard to install at small and medium-sized airports.

2.3 Software Solutions

This section contains solutions depending upon software programs to detect FOD. The working of these programs also depends on the system's capability to run them effectively. Most of these solutions require a camera module to capture the required area and identify the presence of FOD.

In Video-based Foreign Object Debris detection [23], [24], the algorithm compares images taken at different times to identify if any FOD has entered the region. Several cameras are fixed alongside the runway, which capture images in real-time. These images then undergo multiple image processing techniques, such as image pre-processing for intensity normalization of the captured image to filter out unwanted changes, which may occur due to camera instability. As the airport runway is a static scene, a constant background is subtracted from the current captured frame. Finally, image post-processing is applied, which includes noise removal based on a mean filtering algorithm and a morphological operation to fill in the holes. Although the FOD detection system can detect FOD of size 4 cm, a large number of cameras is required to cover the whole runway. As the cameras are installed very close to the runway, the detection range is very small, which fails the foremost objective of a FOD detection system.

The Region-Based CNN for Foreign Object Debris Detection on Airfield Pavement [25] comprises a vehicle on which four GT2050C cameras with 2048 pixels (px) × 2048 px resolution are mounted, scanning 5 meters in width at the same time. The GPS of the moving vehicle is utilised to identify the location of detected FOD. The detection of FOD is achieved by implementing a Convolutional Neural Network (CNN). This CNN classifier contains two modules, a Region Proposal Network (RPN) and a Spatial Transformer Network (STN). Identifying FOD is done in two stages. In the first stage, the RPN is trained on a dataset containing (2048 px × 2048 px) sampled images of 3562 screws and 4202 stones. In the second stage, the STN is implemented to adjust the targets against the influence of scale, rotation and warping. The adjusted images are then analysed with the CNN classifier to identify FOD. The proposed algorithm is implemented on a GTX 1080ti GPU, achieving a 26 fps real-time sampling frequency to detect screws and stones with an accuracy of 97.67%. Employing such a highly computational algorithm is rather challenging, and using a vehicle to detect FOD requires additional permissions from airport management.

There are certain research works that detect objects in a general case scenario. One such research work is Multiple Object Detection using OpenCV on an Embedded Platform. The embedded platform used is the Texas Instruments DM3730 digital media processor, and the algorithm is based on a cascade classifier. The dataset used to train the cascade classifier consists of 4000 positive images of dogs, hand signs, plastic bottles, faces etc. Finally, an XML file is generated, which contains data for the object to be detected. The object detection algorithm was implemented in C++ using the OpenCV library, and the compilation was accomplished using GCC (GNU Compiler Collection). The execution time of the cascade classifier for single object detection is 95 ms. However, this research work does not report the size of the detected objects or the distance between the installed system and the detected object [26].

2.4 Summary

All FOD detection systems can be classified, depending on the used technologies, into two main classes: vision and radar technologies. The summary of the research study is presented in Table 2.2.

Table 2.2: State-of-the-art technologies & research works

Radar systems: Millimeter Wave (mmWave) [20]–[22].
Vision systems: Motion detection [23], [24]; AI based solutions [25]; Object detection [26].

Chapter 3 Problem Statement, Objectives and Main Contributions

This chapter presents the methodological approach of the thesis, including the Problem Statement, Thesis Objectives and Main Contributions. The problem statement section presents the complexity of composing a fit solution to overcome the problem of FOD. The thesis objectives section contains the objectives required to design and develop a suitable solution for the system. The main contributions section contains the individual contributions of both authors regarding the design and development of the system and the report writing.

3.1 Problem Statement

The monitoring of airport runways is very challenging as FOD can be of any size. The system must perform FOD detection with high accuracy (efficiency to detect the presence of FOD), should work remotely, and it should cover the whole required detection area to ensure the safety of aircraft. The system must tackle the problem of detecting FOD even at a far distance, and the classification of the FOD is also necessary.

To achieve a best-fit solution to the problem of FOD detection, we performed a detailed study of related work concerning vision-based FOD systems and radar technology. Many technologies have been implemented and tested at airports to provide an efficient solution. However, radar-based solutions require expensive sensors and additional permission, hence they are not affordable for small and medium-size airports. A vision-based solution is possible; however, it requires high computational capabilities of the CPU/GPU and dedicated software. Instead of using expensive technologies that require high computational power and complicated hardware structures, we wish to use image processing techniques based on computer vision. The thesis problem that we deal with is to propose a flexible, cost-effective solution using off-the-shelf components allowing FOD detection and classification in real-time.


3.2 Thesis Objectives

The first objective of this thesis is to compose a User-Driven Design (UDD) to define the ultimate requirements of the FOD detection system and the constraints concerning its development, which are necessary for the system solution to overcome the problem of FOD.

The second objective is to establish a unique design for the development of a FOD detection system, analysing all the possible technologies and algorithms. The system should be composed of economically available hardware components. Such devices may have low computational power, for which the implementation of the chosen algorithm must be done very meticulously. After determining the suitable technologies and algorithms, the feasibility of the system is assessed to achieve a fit solution.

The third objective is to implement and prototype the FOD detection system. The system should be installed at the desired location to evaluate the achievable ultimate goals of the FOD detection system in the real world. The system should be examined for detecting FOD of different sizes and at different distances. The outcomes of this evaluation define the efficiency of the system.

3.3 Main Contributions

In this thesis, a fit solution design for implementing a FOD detection system dedicated to airports is achieved. Various possible technologies and algorithms are scrutinised and tested in the real world to achieve a proper solution for this system. The accomplishment of the requirements, objectives and performance of the system is examined after analysing the outcomes. The technologies and algorithms capable of solving the problem of FOD are finalised for evaluation. A web interface is also created to notify the user in real-time when there is a FOD occurrence.

The division of work was coordinated through weekly project updates to the supervisor. Mr Sarfaraz Ahmad Mahammad (Author1) performed the system design, while Ms Sushma Vendrapu (Author2) evaluated and revised the design process to acquire a more suitable and efficient FOD detection system. Author2 collected and modified the data; Author1 performed the training process of the FOD classification network. Author1 designed the web interface and prepared the prototype of the FOD detection system for evaluating the system in the real world. Both Author1 and Author2 conducted the implementation and testing of the system. In the report, Author1 wrote Chapter 2, Chapter 4 and Chapter 5, and Author2 revised those chapters, while Author2 wrote Chapter 1, Chapter 6 and Chapter 7, and Author1 revised those chapters. Chapter 3 was written and revised by both Author1 and Author2.

Chapter 4 System Design and Modeling

This chapter aims to show the design process along with its outcome in terms of a system model. The chapter is divided into two sections: System Design, which covers the design of the FOD detection system using UDD, and System Modeling, which represents the aspects of the FOD detection system for the development process.

4.1 System Design

Figure 4.1 presents the UDD, which helps to understand the requirements and preferences of stakeholders and future users in terms of system features, tasks and objectives [27]. The constructed UDD consists of requirement formulation, in which the system solution specifications, objectives and limitations are formulated to overcome the problem of FOD, and product development, which consists of suitable technologies and algorithms utilised to design and develop the system for FOD detection in real-time.

While designing a system, limitations play a crucial role in defining efficiency. The main objective of the system is to monitor the runway constantly for the occurrence of FOD. As mentioned in the overview section of Chapter 2, FOD is of many types. For ease of understanding, they can be divided into two types: living creatures and non-living objects. Living creatures include birds, humans and mammals, whereas non-living objects include metals, personal belongings, cars and unknown objects such as broken parts of technological artefacts.

There are many technologies that can be employed to detect FOD at airports. But as mentioned in the research works section of Chapter 2, they all comprise high-cost and complicated technology, which requires maintenance. The problem of detecting FOD is very challenging, as it can be of any size. Precise requirements for both hardware and software technology are essential for the effective performance of the system. However, the main aim of this project is to provide a cost-effective and simple model for the detection of FOD. Therefore, a Raspberry Pi and a camera module are utilised to develop the system.


Figure 4.1: Proposed design process for FOD detection system

The feasibility of the system is evaluated against the objectives and requirements specified by the users and stakeholders. The users must also examine whether the currently available solutions are capable of solving the problems addressed and whether the requirements are achievable. If there is a problem in achieving the prerequisites, the company stakeholders and end-users may demand changes to the requirements to resolve it. After an assessment showing that all the specified requirements can be accomplished, product development can be established.

Generally, because of trade-offs and variance among the required components and prerequisites, the selection of feasible technologies and methods such as algorithms must be carried out deliberately in the following consecutive steps:

1. Selection of technologies and algorithms,

2. Prototyping and modeling,

3. Validation of the solution.

Additionally, during the development phase of the system, the stakeholders and future users are involved, although it is the designer's responsibility to drive the discussion between the contributors. The primary duty of the end-users and company stakeholders during product development is to monitor whether all the prerequisites and specified requirements are accomplished. After the validation of the components and limitations, the possible required developments can be asserted.

We have to select technologies and algorithms carefully, considering the stakeholders' problems and requirements. Suitable algorithms and techniques are chosen by examining the issue of FOD at runways, taking far distances into account. The preferred technologies and tools must also be price-compatible once the algorithms are chosen for the particular device. There are various algorithms through which the problem of FOD can be solved; however, users can be involved in the selection process for attaining different types of prerequisites from a different perspective.

The modeling of the system depends on the provided requirements and the design process, describing the way users want it to be. Moreover, it represents both hardware and software development processes precisely, along with flowcharts of the algorithms derived from the working technologies, functionalities and constraints. Prototyping of the system shows the actual detection process and its outcomes for the given requirements. Using such a computer vision process, the system depends on software and hardware analysis, capturing the FOD in real-time and displaying the data in the web interface.

The validation and evaluation show the system's capabilities. The stakeholders can check the exact outcome of the proposed requirements and technologies. If the results are as expected from the design process, the system is finalised and modified for additional objectives; otherwise, the design process is repeated to attain the requirements. The validation presents the complete evaluation of the hardware and software system.

Technologies and algorithms related to itemized functionalities and constraints are presented in Table 4.1. The table consists of three sections: functionalities, constraints and possible technologies. The functionalities are further divided into general and itemized. The general functionalities define the characteristics of the required functionality, whereas the itemized ones specify particular measures required to quantify the functionality. Constraints depict the different items through which the itemized functionality is determined and measured. The technologies and algorithms column states which solutions are suitable for a particular functionality. The technologies and algorithms selected for modeling the system, from among the possible ones, are marked with an asterisk in the table.

The main functionality of the system solution is the detection of FOD, which can be itemized as monitoring runways. The constraints defining this functionality are the detection of moving and non-moving objects. The reliability of the system must be more than 95%, and it should perform in real-time with a performance of more than 0.5 fps. The plausible technologies and algorithms for both hardware and software are radar technology, beam patterns, CNN, RPN, computer vision, electro-coupled devices and deep neural networks.

The system solution also requires the functionality of classifying the detected FOD. The objects required for classification can be divided into two groups for easy understanding of users: wildlife and other objects. The wildlife category can be further divided into constraints such as small-sized objects with a surface area (SA) of more than 0.98 m², like dogs, foxes, sheep etc., and large-sized objects with an SA higher than 1.9 m², such as deer, moose etc. Other miscellaneous objects include broken parts of metal or plastic and so on, where small miscellaneous objects should have an SA of more than 0.89 m², such as metals, plastics etc., and large miscellaneous objects an SA greater than 5.6 m², such as cars. The FOD classification can be obtained by using technologies and algorithms like vision systems, HOG, YOLO, MobileNet, SSD, R-CNN, Haar cascades, radar, lidar and beam patterns.

Table 4.1: Technologies and algorithms related to itemized functionalities and constraints (selected technologies are marked with *)

Functionality: Detection (itemized: monitoring runways)
Constraints: Moving and non-moving objects; reliability > 95%; real-time performance.
Possible technologies and algorithms: Radar technology, beam patterns, CNN, RPN, computer vision*, electro-coupled devices, deep neural network*.

Functionality: Object classification (itemized: wildlife)
Constraints: Small - object size (SA) > 0.98 m², e.g. dogs; large - object size (SA) > 1.9 m², e.g. deer.
Possible technologies and algorithms: Vision system, HOG, YOLO, MobileNet, SSD*, R-CNN, Haar cascade.

Functionality: Object classification (itemized: other objects)
Constraints: Small - object size (SA) > 0.01 m², e.g. metals; large - object size (SA) > 5.6 m², e.g. cars.
Possible technologies and algorithms: Radar technology, lidar technology, computer vision*, beam patterns.

Functionality: Detection feasibility (itemized: algorithm performance)
Constraints: Object size > 0.5 m²; fps < 1; detection range < 250 m.
Possible technologies and algorithms: R-CNN, background subtraction*, computer vision*, neural network, deep learning, YOLO.

Functionality: Communication (itemized: threat apprise)
Constraints: Secure, high reliability.
Possible technologies and algorithms: GSM, Wi-Fi*, Bluetooth, web interface*, Ethernet*.

Functionality: System specifications (itemized: performance)
Constraints: Reliability, power supply, cost-effectiveness, apt to territory.
Possible technologies and algorithms: Embedded platforms*, antennas, camera module*, IR sensors, ultrasound.

Detection feasibility specifies the algorithm performance functionalities according to the detection process. For the system to successfully detect FOD, the object size must be greater than 0.5 m², the achievable frame rate is less than 1 fps, and the detection range should be up to 250 m. These constraints can be met by using possible technologies and algorithms like R-CNN, background subtraction, computer vision, neural networks, deep learning and YOLO.

As the system will be installed in a remote location, the communication part is essential. The FOD detection system shows the status of detected FOD in the web interface, which covers the functionality of the communication section. It is itemized as threat apprise, and the constraints required are security and high reliability in real-time, as the system should be efficient for the safety of aircraft. The possible technologies and algorithms are GSM, Wi-Fi, Bluetooth, web interface and Ethernet.

The system specifications are itemized for selecting the proper hardware technology for the implementation of the detection system. The device must be reliable, cost-efficient and performant for prototyping the vision system. The constraints that affect the system specifications are the power supply and the suitable region or area where the system will be installed.

4.2 System Modeling

This section is divided into two sub-sections: Hardware Model and Software Model. The Hardware Model section contains the required aspects of the hardware components for the development of the system. The Software Model section contains the functional requirements of the algorithm for detecting FOD.

4.2.1 Hardware Model

As the second objective states, the hardware components utilised for the development of the system must be economically available. The system should also work remotely with a standard power supply, for which a standard and suitable embedded system must be chosen for the development of the system. As the system is designed to work as a vision system, a camera module is also required. The required monitoring area, which is the airport runway, is considerably large, so a proper lens should also be utilised with the camera module for precise monitoring of the area.

The block diagram presenting the general aspects of the FOD detection system model is shown in Figure 4.2. The selected technology, which is an embedded system, is connected to the power supply and camera module. It should also be connected with either Wi-Fi or Ethernet for uploading the FOD detection results, as the system works remotely. The algorithm must be uploaded to this embedded system for detection of FOD. Finally, the results from this algorithm are passed to the web interface for alerting the user.

Figure 4.2: Block diagram of FOD detection system model

4.2.2 Software Model

The airport runway is a static place where no movement of unnecessary objects is expected; for such places, constant frame-by-frame monitoring is fitting. Subtraction of the foreground from the background is a significant pre-processing stage in many vision-based applications. Although there are several background subtraction techniques that can be implemented using computer vision, proper selection is essential to satisfy every aspect of monitoring the airport, such as lighting conditions, required monitoring area, size of the object to detect and performance.

With the help of the OpenCV library, we can analyze the image and perform different image processing techniques for our required prerequisites. Using this library, we can also load trained models such as the Haar cascade model [28], SSD model [29], YOLO model [30] etc. for object classification. But such object classification models require devices with high computational power. The selection and implementation of the object classification model must be done according to the chosen hardware components for a good result from the final FOD detection system. The simplified flowchart of the system is shown in Figure 4.3, and a corresponding control-flow sketch is given below. The system should first capture the video from the camera module. After capturing the video, the system should detect the presence of FOD. If FOD is detected, the system should classify the FOD, after which it alerts the user of the FOD occurrence.
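The control flow of Figure 4.3 can be summarised in a short Python skeleton. This is a minimal sketch only; the helper functions are placeholders for the detection, classification and notification stages developed in Chapter 5, not the thesis implementation itself.

import cv2

def detect_fod(frame):
    return []          # placeholder: background subtraction stage (Chapter 5)

def classify_fod(frame, regions):
    return []          # placeholder: SSD classification stage (Chapter 5)

def alert_user(labels):
    pass               # placeholder: web interface notification (Chapter 5)

def main():
    capture = cv2.VideoCapture(0)            # camera module video stream
    while capture.isOpened():
        ok, frame = capture.read()           # capture the next frame
        if not ok:
            break
        regions = detect_fod(frame)          # is FOD present in the frame?
        if regions:
            labels = classify_fod(frame, regions)
            alert_user(labels)               # notify the user of the occurrence
    capture.release()

if __name__ == "__main__":
    main()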

Figure 4.3: Software modeling of the FOD detection system

Chapter 5 System Implementation, Prototyping and Validation

This chapter presents the principle of system implementation, its prototype and validation scenarios. The Implementation and Prototyping section describes the development and installation of the system in the real world at the desired location for testing the efficiency of the outcomes. The Validation section contains the results obtained after prototyping the system.

5.1 System Implementation and Prototyping

This section covers the implementation and prototyping of the whole system. Hardware Implementation and Prototyping covers the selection of components, connections and installation of the system at the desired location for FOD detection. Software Implementation and Prototyping covers the implementation of the code with the selected technologies and algorithms for the FOD detection system, as well as the web interface to notify the user.

5.1.1 Hardware Implementation and Prototype

The components required for the implementation of this system are a Raspberry Pi 3 Model B+ and a Sony IMX219 camera module. As mentioned in the first objective, the system needs to work remotely, and it should be able to run the chosen object detection algorithm efficiently. The Raspberry Pi 3 Model B+, a single-board computer, suffices for designing our system as it allows rapid prototyping of vision-based algorithms and is frequently used in research projects. The Raspberry Pi single-board computer is a de facto standard, with low cost, compact design and wireless connectivity. It contains a Broadcom BCM2837B0, Cortex-A53 (ARMv8) 64-bit SoC and 1 GB LPDDR2 SDRAM, which is satisfactory for our selected algorithms. It also accommodates 2.4 GHz and 5 GHz IEEE 802.11 b/g/n/ac wireless LAN to upload the values to the web interface from time to time. The selected camera module is connected to the CSI camera port of the Raspberry Pi 3 Model B+ [31].


For selecting the camera module, the factors considered are focal length and resolution. As the system should also be able to classify the detected objects, these two factors are significant.

Focal Length × FoV = h × Working Distance        (5.1)

Focal Length ∝ 1 / FoV        (5.2)

Equation 5.1 can be found in [32], where h is the horizontal sensor dimension (horizontal pixel count multiplied by pixel size). As expressed in Equation (5.2), with an increase in focal length the Field of View (FoV) decreases, which results in an increase of the object size in pixels [33]. The selected camera module is the Sony IMX219 with a 25 mm focal length CS-mount lens and a field of view of 16°, which also allows the system to detect the movement of objects around the runway. The selected camera is an 8-megapixel CMOS colour camera that can record video at a resolution of 1080p at a speed of 30 fps [34]. The camera module is connected to the Raspberry Pi using the standard 15-pin to 15-pin Pi camera flexible flat cable (FFC) [35], as shown in Figure 5.1.
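To illustrate Equation 5.1, the following minimal Python sketch estimates the horizontal field of view and the apparent object size in pixels at a given working distance. The sensor values used here are assumptions for illustration (an IMX219-class sensor), not measured parameters of the prototype.

SENSOR_WIDTH_MM = 3.68    # assumed horizontal sensor dimension: 3280 px * 1.12 um
H_PIXELS = 3280           # assumed horizontal pixel count of the sensor
FOCAL_LENGTH_MM = 25.0    # focal length of the CS-mount lens

def fov_width_m(working_distance_m):
    # Rearranging Eq. (5.1): FoV = h * Working Distance / Focal Length
    return SENSOR_WIDTH_MM * working_distance_m / FOCAL_LENGTH_MM

def object_width_px(object_width_m, working_distance_m):
    # Fraction of the FoV covered by the object, converted to pixels
    return object_width_m / fov_width_m(working_distance_m) * H_PIXELS

print(fov_width_m(75.0))            # ~11 m wide field of view at 75 m
print(object_width_px(0.5, 75.0))   # a 0.5 m object spans ~149 px at 75 m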

Figure 5.1: Connection of Raspberry Pi and camera module [35]

Before prototyping the system, the connections between the selected components were tested, as shown in Figure 5.2. The Raspberry Pi 3 Model B+ is connected to the Sony IMX219 camera module with the standard 15-pin to 15-pin Pi camera cable (FFC). The camera is fixed firmly on a supporting cardboard structure for testing purposes, and the Raspberry Pi is located below the camera module, as shown in Figure 5.3 and Figure 5.4. The Raspberry Pi is connected to a power adapter, which provides a 5 V, 2.5 A power supply, and is controlled remotely from a laptop using a VNC server.

Figure 5.2: Testing Raspberry Pi and camera module

Figure 5.3: System prototype for FOD detection (front view)

Figure 5.4: System prototype for FOD detection (back view)

5.1.2 Software Implementation and Prototype

The code for FOD detection as well as the classification of detected FOD is implemented in Python 3.8.0 using the OpenCV, imutils, NumPy and datetime libraries. These libraries provide convenience functions for basic image processing techniques and a collection of high-level mathematical functions.

For detecting the presence of FOD, a background subtraction technique based on Gaussian distributions, the BackgroundSubtractorMOG2 algorithm provided by OpenCV, is utilised in the implementation of this system. It is an algorithm based on a Gaussian mixture applied to background and foreground segmentation. Although there are many background subtraction algorithms, this one is selected as it is advanced in many aspects and performs stably even under varying lighting conditions. One essential aspect of this algorithm is the acquisition of the correct number of Gaussian distributions for each pixel over the entire frame. Another important aspect is the improved identification of shadows in frames. Multiple objects can be identified, distinct from foreground and background [36], [37].
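In OpenCV this subtractor is created once and then applied frame by frame. A minimal sketch follows; the parameter values are illustrative assumptions, not the thesis settings.

import cv2

# history: number of frames used to model each pixel's Gaussian mixture;
# varThreshold: squared Mahalanobis distance deciding foreground membership;
# detectShadows: mark shadow pixels separately (gray, value 127) in the mask.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)

A foreground mask is then obtained for every new frame with subtractor.apply(frame).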

The classification of detected FOD is achieved by implementing the Single Shot Multi-box Detector (SSD), which performs object classification for real-time applications. As the name implies, SSD creates multiple boxes of different sizes and aspect ratios over the various objects present in the input image. At estimation time, the network produces scores for each type of object in each box. Then it adjusts the box to best fit the shape of the object [29]. SSD requires just one shot to identify several items in a picture using a multi-box. SSD is a high-precision target detection algorithm with considerably faster processing speed. For FOD classification, SSD is chosen due to its efficient performance and speed compared to other classification algorithms. The performance of such algorithms is analysed by the mean Average Precision (mAP), the average precision over recall values from 0% to 100%, where precision is the correctness of the prediction results and recall reflects how many of the positive cases are detected. When different detection algorithms are run on the VOC2007 dataset, SSD achieves 77.2% mAP for an input resolution of 300 px × 300 px at 46 fps, whereas for an input resolution of 512 px × 512 px, SSD achieves 79.8% mAP at 19 fps. Considering these results, SSD outperforms the state-of-the-art Faster R-CNN, which achieves 73.2% mAP at 7 fps, and YOLO, which achieves 63.4% mAP at 4 fps [29]. For real-time applications such as ours, accuracy and FPS, which are measures of the system performance, are vital, which makes SSD well suited to our objective.
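For reference, these measures can be written out as follows, where TP, FP and FN denote true positives, false positives and false negatives, p(r) is precision as a function of recall, and N is the number of object classes:

\text{precision} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN}

AP = \int_{0}^{1} p(r)\, dr, \qquad mAP = \frac{1}{N} \sum_{c=1}^{N} AP_c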

The entire process of implementing the FOD detection system consists of three phases. Phase one identifies a foreign object. Phase two classifies the objects recognised in the previous step. Phase three consists of assembling a dedicated web interface. The flowchart of the program for detecting FOD is shown in Figure 5.5.

Figure 5.5: System flowchart of detecting, classifying and notifying the presence of FOD

The main objective of our system is to identify the presence of FOD. FOD may include broken parts and many other anonymous objects that cannot be classified. The most reliable method to distinguish such FOD is continuous frame-by-frame monitoring of the region. A background subtraction algorithm based on Gaussian distributions is highly suitable to overcome this type of problem in the FOD detection system. The complete process of recognizing FOD using background subtraction based on the Mixture-of-Gaussians (MoG) algorithm is illustrated in Figure 5.6.

The system first captures the video. Each frame from the captured video is passed through the background subtraction algorithm one after the other. The background subtraction algorithm takes the source pixels from the captured frame and assigns each pixel a specified Gaussian distribution. The time for which the object remains in the scene is the weight of this distribution. This algorithm generates the foreground mask of the captured frame by assigning Gaussian weights to each pixel. When the system captures another frame, this foreground mask updates the background mask. This process continues as the frames progress [38].

Figure 5.6: Flowchart for background subtraction on the system to detect FOD

Now the frame contains different Gaussian values for different pixels. This frame is thresholded such that noise is excluded: pixels with too low or too high values are filtered out. In thresholding the image, the same threshold value is applied to each pixel. When the pixel value is smaller than the assigned threshold, it is set to zero; otherwise, it is set to the maximum. Finally, a binary image is obtained after thresholding the frame [39], [40].

The resulting image from the previous step is noise-free; now dilation of the frame is done to fill the holes. Dilation extends a pixel neighborhood over which the source image takes the maximum value, via the specified structuring element. Dilation is effective in combining split elements, as we need to obtain the whole structure of the moving object. This process is performed multiple times to fill the holes accurately [41]–[43].

Consequently, after identifying the difference in binary values between background and foreground, a bounding box should be constructed over those pixels. OpenCV's contours function is a useful tool to analyze shapes and to detect and recognize objects. The contours function joins all the continuous points (along the boundary) having the same binary value in the dilated, thresholded binary image [44].

The final output consists of a rectangular box around the moving object present in the frame. Thus, identification of moving objects is achievable by the selected algorithm, which accomplishes part of the first objective.
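The detection stage described above can be condensed into the following minimal OpenCV sketch (OpenCV 4 API). The dilation count and the threshold value are illustrative assumptions; the 400 px² contour-area cut-off follows the value given later in this chapter. This is a sketch of the technique, not the thesis listing in Appendix A.

import cv2

cap = cv2.VideoCapture(0)                          # camera module stream
mog2 = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = mog2.apply(frame)                    # Gaussian-mixture foreground mask
    # Threshold: drop noise and shadow pixels (MOG2 marks shadows as 127)
    _, binary = cv2.threshold(fg_mask, 200, 255, cv2.THRESH_BINARY)
    # Dilate repeatedly to fill holes and merge split fragments of one object
    binary = cv2.dilate(binary, None, iterations=3)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) < 400:         # drop objects smaller than 20 px x 20 px
            continue
        x, y, w, h = cv2.boundingRect(contour)     # rectangular box around the moving object
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('FOD detection', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()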

Now that the system can detect the presence of FOD, it must be able to classify it, which is presented in phase two. Here we use the Caffe [45] framework, a deep learning framework, to train an SSD model on our FOD classification dataset. Caffe is written in C++ and has Python and Matlab bindings.

There are five steps involved in training an SSD model with the Caffe framework.

• Step 1 - Caffe framework: Installing the Caffe framework suitable for the system, along with the required dependencies. It also requires several prerequisites before installation. Official documentation is provided by BAIR for a clean installation of this framework [46].

• Step 2 - Data preparation: Preparing the dataset for object identification. This step involves collecting suitable images of FOD for the dataset. Afterwards, this step requires generating LMDB files and the solver definition.

• Step 3 - Model training: This step involves feeding the LMDB files of the custom dataset to the Caffe framework. Pre-trained weights are used to train the model.

• Step 4 - Model deploying: After training, the model is deployed in order to work with OpenCV’s deep neural network.

• Step 5 - Implementation on Raspberry Pi: This is the most crucial part. As the Raspberry Pi has limited memory and processing speed, the implementation process may affect the final performance. To improve the parallel performance of both algorithms, the model is run on the Deep Neural Network (DNN) module of OpenCV.

The above-explained steps are described further below as stages, detailing how to train and deploy an SSD model for FOD classification.

Stage 1: The main dependencies with which we train the model in the Caffe framework are the Nvidia CUDA Toolkit and Nvidia cuDNN. These dependencies are used to speed up the training process. The installation procedure follows the official documentation from BAIR [45].

Stage 2: The collected dataset primarily consists of animals, as they pose a huge threat to the safety of aircraft during landing or takeoff procedures. The factors considered during the collection process are the animals that can gain access to airports due to improper security fencing, such as birds and mammals, which are abundant in various places. Some of the animals are considered based on previously occurred FOD events at airports. The birds included in the dataset to be detected by the system are:

• Cranes
• Eagles
• Hawks
• Herons
• Pelicans
• Vultures

The mammals that can be found abundantly in some areas are included in this dataset, including the mammals that caused previous FOD incidents:

• Alligator
• Deer
• Dogs
• Fox
• Horse
• Kangaroo
• Monkey
• Moose
• Raccoon
• Sheep

The system is also capable of classifying objects such as cars and human beings. The whole dataset consists of 7612 images, where each object class holds at least 400 images. LMDB files are then generated for our custom dataset using tools provided by the framework. The LMDB file is a key-value repository database used by the Caffe framework; one of the main advantages of this format is its high efficiency. The training and testing datasets are translated into a form stored in the LMDB file.

Stage 3: The model is then trained, starting from the weights provided by the SSD implementation in Caffe [29]. After the training process is completed, the outputs are the Caffe model and solver state, which are further used to deploy the model.

Stage 4: After the model is trained, it is deployed with the help of the previously generated solver state files and the provided pre-trained weights with which it was trained. The DNN module of OpenCV can now load this model, which is a more manageable way of implementing this program on the Raspberry Pi.

Stage 5: The implementation of the SSD model on the Raspberry Pi is achieved by using the DNN module of OpenCV. The steps followed in this solution are shown in Figure 5.7.

Reading and initializing the network is done with the DNN module of the OpenCV library, which returns a class capable of creating and handling the complete artificial neural network. The frames extracted from the video capture are then pre-processed by specific image processing techniques so that they can be passed through this neural network [47].

A 4-D image blob is created from the input frame so that it can be passed through the loaded network. The input image is resized and cropped from the centre, after which the mean values are subtracted and the values are scaled by a scale factor. The blob is then passed through the loaded Caffe model. As the frame is fed through the neural network, the forward class function is used to obtain the detection names and their prediction levels [47], [48].

Figure 5.7: Flowchart of FOD classification on the system

As the prediction levels vary and weak detections can result in false classification, the weak detections are filtered out. The index values of high detection rates are then extracted from the frame for drawing a bounding box around the classified object.
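A minimal sketch of this classification stage with OpenCV's DNN module is given below. The file names, class list and confidence threshold are assumptions for illustration; the blob parameters follow common MobileNet-SSD preprocessing, not necessarily the values used in the thesis.

import cv2
import numpy as np

net = cv2.dnn.readNetFromCaffe('deploy.prototxt', 'fod_ssd.caffemodel')
CLASSES = ['background', 'car', 'person', 'deer', 'dog']   # assumed label order
CONF_THRESHOLD = 0.5            # prediction levels below this are treated as weak

def classify_fod(frame):
    h, w = frame.shape[:2]
    # 4-D blob: resize to the network input, scale pixel values, subtract mean
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()                  # forward pass through the SSD
    results = []
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence < CONF_THRESHOLD:
            continue                            # filter out weak detections
        idx = int(detections[0, 0, i, 1])
        box = (detections[0, 0, i, 3:7] * np.array([w, h, w, h])).astype(int)
        results.append((CLASSES[idx], float(confidence), box))
    return results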

The coding part consists of both the background subtraction algorithm from phase one for FOD detection and the SSD model from phase two for FOD classification, and it continuously monitors the area. In the first step, the Caffe model and prototxt files generated by the training process of phase two are loaded using the DNN module; these are used for the classification of detected FOD. The different classes (objects) for which the network is trained must be listed after loading the model. The system then captures video of the required area, after which a loop is initiated to detect the presence of FOD, which runs repeatedly. In this loop, after a frame is captured from the recorded video, the background subtraction Mixture-of-Gaussians (MoG) algorithm is applied and the whole process of modifying the frame proceeds. After obtaining the contours, objects with a contour area of less than 400 px, i.e. smaller than 20 px × 20 px, are dropped, because the algorithm raises false detections for pixel areas below 400 px. When the presence of a FOD is identified, the respective frame is passed through the Caffe model network to classify the FOD. If the algorithm can classify the FOD, the predicted result and a bounding box are displayed over the detected FOD; otherwise a simple bounding box is displayed over the FOD. The FPS, i.e. the rate of successive frames through which the system monitors the area, is also calculated after each loop to define the performance of the system in the real world. The implemented coding part for FOD detection and classification is shown in Appendix A.1, and a GitHub link is provided as well for the files required to implement this system.
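The detection half of this loop, condensed from the Appendix A listing into a minimal sketch (parameter values as used there):

import cv2
import imutils

# Background subtractor with the parameters used in Appendix A
subtractor = cv2.createBackgroundSubtractorMOG2(history=100, varThreshold=90,
                                                detectShadows=False)

def detect_fod(frame):
    """Return bounding boxes of moving objects larger than 20 px x 20 px."""
    mask = subtractor.apply(frame)
    # Threshold and dilate to remove artifacts and fill holes
    thresh = cv2.threshold(mask, 25, 255, cv2.THRESH_BINARY)[1]
    thresh = cv2.dilate(thresh, None, iterations=2)
    cnts = imutils.grab_contours(
        cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                         cv2.CHAIN_APPROX_SIMPLE))
    # Contours below 400 px raise false detections and are dropped
    return [cv2.boundingRect(c) for c in cnts if cv2.contourArea(c) >= 400]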

Both algorithms are now ready to be deployed on the Raspberry Pi and installed at the required location. Since the Raspberry Pi works remotely, a web interface is mandatory to notify the users. As the program is implemented as a Python script, it is easy to send the specifics to the web interface if it runs on Python as well. The web interface is based upon Flask, a Python-based micro web framework. A function containing the attributes of the detected objects is called by the server from the FOD detection algorithm.

Figure 5.8: Flowchart of web interface on the system

The flowchart of the Flask server on the FOD detection system is shown in Figure 5.8. To notify the user through the web interface, the meta refresh method is used; since the responsiveness of the notification is an important factor, this simple and suitable method was chosen. The detected objects are written into a text file along with their time of occurrence, which can be accessed as a database. Therefore, users can have a broader view of FOD occurrences for taking preventive measures.

The web interface is also written in Python 3.8.0 using the Flask library. For creating a Flask-based web server, templates are required to display the data on the web interface. These templates are written in HTML along with a CSS file, which styles the elements of the web pages. One web page displays the normal response from the system (when there is no occurrence of FOD), as shown in Figure 5.9. The other web page displays the caution message when there is an occurrence of FOD, as shown in Figure 5.10. A common file, which is also accessed by the FOD detection algorithm, is reviewed repeatedly by the server for raising this caution message. This file contains the detected FOD data and assists the system in raising the caution message when there is a change in this file.
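Condensed from the server listing in Appendix B, the choice between the two pages reduces to a check of the shared file's size — a sketch, with the template names as used in the appendix:

from flask import Flask, render_template
import os

app = Flask(__name__)
FILE_PATH = 'textfile.txt'  # file shared with the detection script

@app.route('/')
def content():
    # An empty file means no FOD has been detected since the last
    # acknowledgement by the user
    if os.path.getsize(FILE_PATH) == 0:
        return render_template('index.html')   # normal page (Figure 5.9)
    return render_template('content.html')     # caution page (Figure 5.10)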

One additional web page is created to display the data of occurred FOD along with the date and time. The FOD detection and classification algorithm records the data in the text file, which is then accessed by the web server to display the data on the web interface. The web page showing the data of occurred FOD is shown in Figure 5.11. The coding part for the templates (web pages), written in HTML, and the CSS file are shown in Appendix B.2.

Figure 5.9: Web interface display during no FOD occurrence

Figure 5.10: Web interface response during FOD occurrence

Figure 5.11: Web page displaying the data of occurred FOD

5.2 Validation

As the system is developed for the identification and recognition of FOD, an evaluation of the system is crucial to prove its efficiency. The performance of the system is evaluated mainly on the following measures:

• Frame per second (fps): Efficiency of the system to capture and analyse frames of the required area for detecting the presence of FOD;

• Object identification rate: System efficiency to identify the presence of FOD;

• Object classification rate: System efficiency to classify the detected FOD;

• Range of correct classification distance.

The system was installed at an appropriate location for evaluation purposes. The FOD selected for evaluation are a person and a car: the pixel area covering the person is equivalent to the pixel area covering animals such as deer, sheep, kangaroo etc., while the pixel area covering the car is equivalent to that covering animals such as moose, horse etc. The system was evaluated with these FOD at different distances such as 10 m, 20 m, 25 m, 50 m, 75 m, 100 m etc. The detection efficiency of the system was obtained by performing different tests of constant duration, analysing the successful detection of the FOD in frames and comparing it with the total number of frames captured. The system is only able to classify the FOD at distances of up to 20 meters, although it can detect the presence of FOD effectively at distances of up to 200 meters.
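Stated as a formula, consistent with the frame-counting procedure described above, the detection efficiency reported in the following tables is:

Detection efficiency = (frames in which the FOD is detected / total frames captured) × 100%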

The detection of FOD at a distance of 25 m is shown in Figure 5.12; the FOD considered here is a person. The background image with no FOD and the currently captured frame with a FOD are shown in the first row. The captured frames are processed with the Gaussian distribution and artifact filtering such as thresholding and dilation of the image, shown in the second row. The two frames are then subtracted from each other to detect the FOD, and finally a bounding box is drawn around the FOD. The enhanced frame of the detected FOD is shown at the end. This step-by-step process of detecting a FOD (person) with background subtraction and image processing was repeated for different distances, and the results are presented in Table 5.1.

Table 5.2 presents the efficiency of the system to detect the presence of FOD (person) at different distances and their respective size in px. The performance of the system is measured by determining the FPS rate. Table 5.3 presents the efficiency of the system to classify the detected FOD (person) for different distances and their respective size in pixels. The total size of the captured frames is 1280 px × 720 px.

The FOD detection system can detect any number of objects that are distinct from background to foreground, but it can only classify up to four objects in each frame. Figure 5.13 presents the detection of multiple moving objects as well as their classification; as most of the objects are far away from the camera, the system is unable to classify them.

Experiments were also conducted on detecting a large FOD, a car. These experiments were likewise repeated for different distances, and the results are presented in Table 5.4. The detection efficiency of the system increases as the pixel area of the object increases. The system can detect the presence of a car at distances of up to 180 meters, but can classify the car effectively only up to a distance of 30 meters. Table 5.5 presents the efficiency of the system to detect the presence of a car.

Figure 5.12: Detection of FOD (person) at a distance of 25 meters

Table 5.1: Detection and classification of FOD (person) for different distances

S.No   Distance (in meters)   Captured Frame   Enhanced FOD from Frame   FOD Classified
1      15                     [image]          [image]                   Yes
2      25                     [image]          [image]                   No
3      50                     [image]          [image]                   No
4      75                     [image]          [image]                   No
5      100                    [image]          [image]                   No

(The Captured Frame and Enhanced FOD from Frame columns contain the corresponding images in the original document.)

Table 5.2: System efficiency for FOD (person) detection

S.No   Distance (in meters)   Evaluation time (in minutes)   FOD size (in px)   Average FPS   Detection efficiency
1      25                     2                              150 × 100          0.98          100%
2      50                     2                              90 × 55            1.03          100%
3      75                     5                              60 × 28            0.88          84%
4      100                    5                              45 × 25            0.91          48%

Table 5.3: System efficiency to classify the detected FOD (person)

S.No   Distance (in meters)   Evaluation time (in minutes)   FOD size (in px)   Average FPS   Classification efficiency
1      10                     3                              350 × 150          1.03          100%
2      15                     3                              280 × 130          0.97          72%
3      20                     3                              210 × 110          1.01          9%

Figure 5.13: Detection and classification of multiple moving objects

Table 5.4: Detection and classification of FOD (car) for different distances

S.No   Distance (in meters)   Captured Frame   Enhanced FOD from Frame   FOD Classified
1      40                     [image]          [image]                   Yes
2      50                     [image]          [image]                   No
3      75                     [image]          [image]                   No
4      100                    [image]          [image]                   No
5      130                    [image]          [image]                   No
6      180                    [image]          [image]                   No

(The Captured Frame and Enhanced FOD from Frame columns contain the corresponding images in the original document.)

Table 5.5: System efficiency for FOD (car) detection

S.No   Distance (in meters)   FOD size (in px)   Average FPS   Detection efficiency
1      50                     110 × 70           0.91          100%
2      75                     80 × 50            0.97          100%
3      100                    50 × 45            0.96          81%
4      150                    35 × 30            0.89          60%
5      180                    25 × 20            1.01          43%

Chapter 6

Discussion

In Chapter 1, we discussed the problem of FOD faced at several airports, mainly on runways. We approached this problem knowing that small airports face many difficulties regarding the cost and maintenance of a FOD detection system. Users require a low-cost device with high accuracy for the detection of FOD. In the survey of related works we described many complicated technologies, comprising both software and hardware. Designing such an efficient FOD detection system requires both hardware and software: in hardware, a low-cost device is necessary so that the FOD detection system is accessible to small and medium-sized airports; in software, amongst all the algorithms we chose four, of which only two proved feasible, while the remaining two did not work for the implementation.

For composing such a cost-efficient and high-performance FOD detection system, the design and implementation should be performed precisely, for which we used UDD. The Raspberry Pi 3 Model B+ was selected because it allows rapid prototyping of vision-based applications at an affordable price. In the design process, we scrutinised multiple algorithms suitable for real-time FOD detection; the selection of the algorithm also affects the performance of the selected hardware. We discussed many algorithms and their functionalities, of which only a few are conceivable. We learned that some algorithms are highly efficient but not fit for this project, as their high complexity makes them infeasible on our hardware. The supervisors approved the system design solution, and the project continued towards implementation. The required system specifications defining the performance are presented in Table 4.1, Technologies and algorithms related to itemized functionalities and constraints.

Though there are different sophisticated algorithms and technologies in both software and hardware, as seen in Chapter 2, we selected the algorithms Haar cascade, YOLO, background subtraction, and SSD. These algorithms were chosen for their efficient performance in the detection process, and we evaluated them against our requirements as well as the usability problems that would occur.

For designing and modeling the system, efficient algorithms that detect FOD irrespective of size are required. The airport runway is an idle area where unnecessary objects are not allowed. For this reason, to monitor the presence of FOD, we selected background subtraction based on the Gaussian distribution; constant frame-by-frame monitoring of the required region is adequate for detecting the presence of FOD. The SSD model is also implemented for the classification of FOD. To train the SSD model, images of both living and non-living objects are required. The 7612 collected images contain animals such as deer, dogs, fox, horse, monkey, moose, raccoon, sheep and kangaroo, and birds such as cranes, eagles, hawks, herons, pelicans and vultures; humans and cars are also included.

Accordingly, we composed efficient algorithms based on background subtraction (MoG) in OpenCV for detecting the presence of FOD and the Single Shot MultiBox Detector (SSD) for the classification of FOD. The algorithms are implemented as a computer vision system, which is fast and accurate and can also perform complex tasks such as identifying multiple objects and classifying them. Background subtraction uses little computational power, which suits the most reliable performance of the Raspberry Pi, and it is further combined with the SSD model for the classification of FOD.

In the validation section of Chapter 5, we summarised the results obtained by the system. The FOD can be detected within a range of 180 meters. Tests were performed for the detection of moving objects at distances between 25 and 180 meters; as the distance increases, the ability to detect the FOD decreases, since the object's size in pixels decreases. For evaluating the system, the selected FOD are a person and a car. The classification of the FOD is possible only up to a distance of 15 meters for medium-sized objects (persons) and 30 meters for large-sized objects (cars) effectively.

Table 6.1: System efficiency to detect FOD

S.No   Detected FOD   Detection range       Efficiency
1      Person         up to 50 meters       100%
                      50 to 75 meters       84%
                      75 to 100 meters      48%
2      Car            up to 75 meters       100%
                      75 to 100 meters      81%
                      100 to 150 meters     60%
                      150 to 180 meters     43%

A person covers a pixel area comparable to medium-sized animals such as deer, sheep etc.; similarly, a car covers a pixel area comparable to large-sized animals such as moose, horse etc. The pixel area covered by these FOD is also shown in the validation section. Table 6.1 presents the efficiency of the system for detecting different-sized FOD at different distances.

Figure 6.1 shows the graphical representation of the efficiency evaluation of FOD detection and classification for a person at different distances. Within a range of 15 meters, the system can both detect and classify the FOD. As the distance between the system and the FOD increases, the pixel area covering the FOD decreases gradually; consequently, the ability of the system to classify and detect the FOD also decreases. At a range of 20 meters, the system can no longer classify the FOD completely, though it can still detect its presence. The system can detect the presence of FOD successfully up to a distance of 100 meters; however, at a range of 125 meters, it can no longer detect the presence of medium-sized FOD.

Figure 6.1: Graphical representation of system efficiency for detecting and classifying medium size FOD - a person

Figure 6.2 shows the graphical representation of the efficiency evaluation of FOD detection and classification for a car at different distances. As the pixel area of the car is larger, the classification and detection ranges are also greater. Within a range of 30 meters, the system can both classify and detect the FOD; at a distance of 50 meters, it can no longer classify the FOD effectively. The system can detect the presence of the FOD efficiently up to a range of 180 meters; however, at a range of 210 meters, it can no longer detect the presence of the FOD.

Figure 6.2: Graphical representation of system efficiency for detecting and classifying large size FOD - a car

For such systems, a web interface is essential for alerting users in real time regarding the occurrence of FOD. The web interface is based on Flask and comprises simple functions so as not to affect the performance of the system. The web interface is designed to display the caution message when there is an occurrence of FOD in real time. A database is also implemented, which stores the data of previously occurred FOD along with the time and date.

Chapter 7

Conclusions and Future Works

7.1 Conclusions

The detection of FOD is necessary for ensuring the safety of aircraft and the orderly functioning of the airport. Although there are many detection systems to tackle the problem of FOD, a well-defined solution design is required to achieve a low-cost and efficient device with accurate detection results. To fulfil the user criteria, specifications and limitations using UDD, we constructed a requirement formulation and objectives for the FOD detection system. The solution contains both hardware and software system modeling, achieved through the UDD and the functionalities table.

We proposed the design process of a FOD detection system intended for use on airport runways. The presented process approaches the problems at small and medium airports not only from our perspective but also considering the stakeholders' and future users' requirements; the design procedure considers the various technologies with their functionalities, constraints and even cost.

The proposed system design is verified in a real-time scenario to provide an inexpensive device with proper algorithms and good detection performance. The complete functioning of the system is also presented. The proposed solution can both detect and classify FOD. The results obtained after testing the efficiency of the system for detecting FOD such as a person and a car are presented in Table 7.1.

The design and implementation of the system are based on both hardware and software; however, the software part primarily carries the FOD detection and classification process. Object detection and classification are based on computer vision with OpenCV. The algorithms used for this process are the SSD model and background subtraction, which are reliable on this device.

The hardware parts utilised are the Raspberry Pi and the camera module, for which the algorithms are relevant and accessible. The validation part shows the object detection outcomes for the given algorithms, which detect objects at distances of up to 180 meters. The object detection proves the detection of moving objects, and if any object appears, the user is notified with a warning message through the web interface.

Table 7.1: Efficiency of the system to detect FOD for different ranges

S.No   Detected FOD   Detection range       Efficiency
1      Person         up to 50 meters       100%
                      50 to 75 meters       84%
                      75 to 100 meters      48%
2      Car            up to 75 meters       100%
                      75 to 100 meters      81%
                      100 to 150 meters     60%
                      150 to 180 meters     43%

Using UDD, we designed the object detection process; with the implementation of the algorithms, the detection process works efficiently. The combination of hardware and software provides better functioning of the device. The web interface application shows the warning message on FOD occurrence.

7.2 Future Works

For future work, we would like to implement improved algorithms for more accurate detection even at longer distances. The functioning of the selected algorithms can also be improved, for example by optimising the code, increasing the FPS to make the system more reliable, implementing a more secure database, providing new classes for the object classification model, and applying advanced background subtraction techniques. Moreover, we would like to use different high-performance embedded systems and other sensors that can complement the camera module; more capable devices can run high-performance algorithms to detect objects at longer distances. With the addition of stepper motors, the camera can be rotated to monitor the whole surrounding area. We could also improve this work by adding an alarm or warning signal that sounds when an object is detected at an inappropriate location, and by adding a feasible GPS module to find the exact location of the FOD. For better use, a mobile application could be created to give users access so that they are notified whenever a FOD is detected.

The proposed system is for detecting the presence of FOD, particularly on airport runways. However, we would like to extend this system for monitoring taxiways and apron areas by using advanced algorithms and an improved camera module. In such cases, the algorithm should exclude airport management vehicles, which can be seen in those areas. The system could also be applied to identify traffic on roads or threat objects in military areas and other secure locations, and it could be adapted for use in shopping complexes to count the number of objects present on shelves.

References

[1] Y. Zhongda, L. Mingguang, and C. Xiuquan, “Research and implementation of FOD detector for airport runway,” in IOP Conference Series: Earth and Environmental Science, IOP Publishing, vol. 304, 2019, p. 032050.
[2] N. Rajamurugu, P. Karthikeyan, K. Ajithkumar, A. I. Hussain, and V. Vimalpraksh, “A study of foreign object damage (FOD) and prevention method at the airport and aircraft maintenance area,”
[3] Foreign Object Debris, https://www.boeing.com/commercial/aeromagazine/aero_01/textonly/s01txt.html, [Online; accessed 26-April-2020].
[4] Wikipedia contributors, Air France Flight 4590 — Wikipedia, the free encyclopedia, [Online; accessed 19-May-2020], 2020. [Online]. Available: https://en.wikipedia.org/w/index.php?title=Air_France_Flight_4590&oldid=956547803.
[5] E. K. KCAW, How a deer can cause a plane crash, KTOO, Feb. 2, 2016. [Online]. Available: https://www.ktoo.org/2016/02/01/124024/ (visited on 05/05/2020).
[6] “Flight canceled after plane crashes into kangaroo,” USA Today, Aug. 3, 2015. [Online]. Available: https://www.usatoday.com/story/travel/roadwarriorvoices/2015/08/03/flight-canceled-after-plane-crashes-into-kangaroo/83842284/ (visited on 05/05/2020).
[7] “FAA Wildlife Strike Database,” [Online]. Available: https://wildlife.faa.gov/home (visited on 05/12/2020).
[8] “Foreign Object Debris Introduction,” ReadyMax, Jan. 2020. [Online]. Available: https://www.readymax.com/foreign-object-debris-introduction/ (visited on 05/12/2020).
[9] J. Patterson Jr, “Foreign object debris (FOD) detection research,” International Airport Review, vol. 11, no. 2, pp. 22–7, 2008.
[10] K. M. D. Chew, Method and system for rail track scanning and foreign object detection, US Patent 7,999,848, Aug. 2011.


[11] L. Jiang, G. Peng, B. Xu, Y. Lu, and W. Wang, “Foreign object recognition technology for port transportation channel based on automatic image recognition,” EURASIP Journal on Image and Video Processing, vol. 2018, no. 1, p. 147, 2018.
[12] L. Xiang, Z. Zhu, J. Tian, and Y. Tian, “Foreign object detection in a wireless power transfer system using symmetrical coil sets,” IEEE Access, vol. 7, pp. 44622–44631, 2019.
[13] Z. Xue, S. Candemir, S. Antani, L. R. Long, S. Jaeger, D. Demner-Fushman, and G. R. Thoma, “Foreign object detection in chest x-rays,” in 2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), IEEE, 2015, pp. 956–961.
[14] AC 150/5210-24 - Airport Foreign Object Debris (FOD) Management, https://www.faa.gov/airports/resources/advisory_circulars/index.cfm/go/document.current/documentNumber/150_5210-24, [Online; accessed 26-April-2020].
[15] R. Hussin, N. Ismail, and S. Mustapa, “A study of foreign object damage (FOD) and prevention method at the airport and aircraft maintenance area,” in IOP Conference Series: Materials Science and Engineering, IOP Publishing, vol. 152, 2016, p. 012038.
[16] Wikipedia contributors, US Airways Flight 1549 — Wikipedia, the free encyclopedia, [Online; accessed 26-April-2020], 2020. [Online]. Available: https://en.wikipedia.org/w/index.php?title=US_Airways_Flight_1549&oldid=952463614.
[17] Concorde crash in 2000, https://www.bbc.com/news/world-europe-11923556, [Online; accessed 26-April-2020].
[18] “Bioseco - solutions for bird protection,” [Online]. Available: http://www.bioseco.com/ (visited on 06/26/2020).
[19] “Innovative system for preventing bird-aircraft collisions | Bioseco - solutions for bird protection,” [Online]. Available: http://bioseco.com/projects (visited on 06/25/2020).
[20] E. Yigit, S. Demirci, A. Unal, C. Ozdemir, and A. Vertiy, “Millimeter-wave ground-based synthetic aperture radar imaging for foreign object debris detection: Experimental studies at short ranges,” Journal of Infrared, Millimeter, and Terahertz Waves, vol. 33, no. 12, pp. 1227–1238, 2012.
[21] K. Mazouni, A. Zeitler, J. Lanteri, C. Pichot, J.-Y. Dauvignac, C. Migliaccio, N. Yonemoto, A. Kohmura, and S. Futatsumori, “76.5 GHz millimeter-wave radar for foreign objects debris detection on airport runways,” International Journal of Microwave and Wireless Technologies, vol. 4, no. 3, pp. 317–326, 2012.

[22] F. Nsengiyumva, C. Pichot, I. Aliferis, J. Lanteri, and C. Migliaccio, “Millimeter-wave imaging of foreign object debris (FOD) based on two-dimensional approach,” in 2015 IEEE Conference on Antenna Measurements & Applications (CAMA), IEEE, 2015, pp. 1–4.
[23] X. Qunyu, N. Huansheng, and C. Weishi, “Video-based foreign object debris detection,” in 2009 IEEE International Workshop on Imaging Systems and Techniques, IEEE, 2009, pp. 119–122.
[24] W. Chen, Q. Xu, H. Ning, T. Wang, and J. Li, “Foreign object debris surveillance network for runway security,” Aircraft Engineering and Aerospace Technology, 2011.
[25] X. Cao, P. Wang, C. Meng, X. Bai, G. Gong, M. Liu, and J. Qi, “Region based CNN for foreign object debris detection on airfield pavement,” Sensors, vol. 18, no. 3, p. 737, 2018.
[26] S. Guennouni, A. Ahaitouf, and A. Mansouri, “Multiple object detection using OpenCV on an embedded platform,” in 2014 Third IEEE International Colloquium in Information Science and Technology (CIST), IEEE, 2014, pp. 374–377.
[27] C. Abras, D. Maloney-Krichmar, J. Preece, et al., “User-centered design,” in W. Bainbridge (ed.), Encyclopedia of Human-Computer Interaction, Thousand Oaks: Sage Publications, vol. 37, no. 4, pp. 445–456, 2004.
[28] “OpenCV: Cascade classifier,” [Online]. Available: https://docs.opencv.org/3.4/db/d28/tutorial_cascade_classifier.html (visited on 05/05/2020).
[29] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “SSD: Single shot multibox detector,” in ECCV, 2016.
[30] J. Redmon and A. Farhadi, “YOLOv3: An incremental improvement,” arXiv, 2018.
[31] “Buy a Raspberry Pi 3 Model B+ – Raspberry Pi,” [Online]. Available: https://www.raspberrypi.org/products/raspberry-pi-3-model-b-plus/ (visited on 06/06/2020).
[32] Imaging Resource Guide, Understanding focal length and field of view, Edmund Optics. [Online]. Available: https://www.edmundoptics.com/knowledge-center/application-notes/imaging/understanding-focal-length-and-field-of-view/ (visited on 06/06/2020).
[33] Wikipedia contributors, Focal length — Wikipedia, the free encyclopedia, https://en.wikipedia.org/w/index.php?title=Focal_length&oldid=953092770, [Online; accessed 20-May-2020], 2020.

[34] “Arducam 8MP Sony IMX219 camera module w/ CS lens 2718 (Raspberry Pi),” [Online]. Available: https://www.robotshop.com/eu/en/arducam-8mp-sony-imx219-camera-module-cs-lens-2718-raspberry-pi.html (visited on 06/14/2020).
[35] Arducam Documentation, Raspberry Pi camera pinout. [Online]. Available: https://www.arducam.com/raspberry-pi-camera-pinout/ (visited on 06/04/2020).
[36] “Background subtraction — OpenCV-Python tutorials 1 documentation,” [Online]. Available: https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_video/py_bg_subtraction/py_bg_subtraction.html (visited on 05/04/2020).
[37] “OpenCV: cv::BackgroundSubtractorMOG2 class reference,” [Online]. Available: https://docs.opencv.org/3.4/d7/d7b/classcv_1_1BackgroundSubtractorMOG2.html (visited on 05/05/2020).
[38] “OpenCV: How to use background subtraction methods,” [Online]. Available: https://docs.opencv.org/3.4/d1/dc5/tutorial_background_subtraction.html (visited on 05/06/2020).
[39] “OpenCV: Image thresholding,” [Online]. Available: https://docs.opencv.org/master/d7/d4d/tutorial_py_thresholding.html (visited on 05/21/2020).
[40] “OpenCV: Miscellaneous Image Transformations,” [Online]. Available: https://docs.opencv.org/master/d7/d1b/group__imgproc__misc.html#gae8a4a146d1ca78c626a53577199e9c57 (visited on 05/21/2020).
[41] A. Bishnoi, “Noise removal with morphological operations opening and closing using erosion and dilation,” International Journal of Modern Engineering Research, vol. 4, 2014.
[42] “Morphological transformations — OpenCV-Python tutorials 1 documentation,” [Online]. Available: https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.html (visited on 05/21/2020).
[43] “Image filtering — OpenCV 2.4.13.7 documentation,” [Online]. Available: https://docs.opencv.org/2.4/modules/imgproc/doc/filtering.html?highlight=dilate (visited on 05/21/2020).
[44] “OpenCV: Contours: Getting started,” [Online]. Available: https://docs.opencv.org/trunk/d4/d73/tutorial_py_contours_begin.html (visited on 05/06/2020).

[45] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, “Caffe: Convolutional architecture for fast feature embedding,” arXiv preprint arXiv:1408.5093, 2014.
[46] “Caffe installation,” [Online]. Available: http://caffe.berkeleyvision.org/installation.html (visited on 05/12/2020).
[47] “OpenCV: Deep neural network module,” [Online]. Available: https://docs.opencv.org/3.4/d6/d0f/group__dnn.html (visited on 05/23/2020).
[48] “OpenCV: cv::dnn::Net class reference,” [Online]. Available: https://docs.opencv.org/3.4/db/d30/classcv_1_1dnn_1_1Net.html (visited on 05/23/2020).

Appendices

Appendix A

A.1 Program Listing of FOD Detection and Classification

The FOD detection and classification code is presented in this section. The required files for implementing this program are available at the GitHub repository: https://github.com/mdsahmad39/FOD_Files.

"""Detection and Classification of FOD"""
from imutils.video import FPS, VideoStream
from datetime import datetime
from picamera import PiCamera
import cv2
import imutils
import numpy as np
import os
import time

MAIN_DIRECTORY = os.getcwd()
print("[INFO] loading model...")
net = cv2.dnn.readNetFromCaffe('MobileNetSSD_deploy.prototxt.txt',
                               'MobileNetSSD_deploy_10000_ahmad.caffemodel')
print("[INFO] model loaded...")

CLASSES = ["background", "alligator", "car", "crane", "deer", "dog",
           "eagle", "fox", "hawk", "heron", "horse", "person", "kangaroo",
           "monkey", "moose", "pelican", "raccoon", "sheep", "vulture"]

COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3))


def object_detection(image):
    """
    Classify the detected FOD using the trained Caffe model.

    Parameters:
        image : array
            Captured frame from the video. The frame is first pre-processed
            and then passed through the loaded Caffe model for object
            classification.
    """
    (a, b) = image.shape[:2]
    # Pre-processing the image so that it can be passed through the loaded
    # Caffe model
    blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 0.007843,
                                 (300, 300), 127.5)
    print("[INFO] computing object detections...")
    # The processed blob is passed through the Caffe model for FOD
    # classification
    net.setInput(blob)
    detections = net.forward()
    # Loop over the detections for classifying multiple FOD
    for i in np.arange(0, detections.shape[2]):
        # Extracting the confidence (i.e. probability) associated with the
        # prediction result
        confidence = detections[0, 0, i, 2]
        # Filtering out the weak detections by ensuring the confidence is
        # greater than the minimum confidence
        if confidence > .60:
            # Extracting the index of the class label from the detection
            # results
            idx = int(detections[0, 0, i, 1])
            box = detections[0, 0, i, 3:7] * np.array([b, a, b, a])
            (startX, startY, endX, endY) = box.astype("int")
            # Identifying the label of the detected FOD
            label = "{}: {:.2f}%".format(CLASSES[idx], confidence * 100)
            print("[INFO] {}".format(label))
            # Writing the prediction result along with the time in the text
            # file so that it may be accessed as a database
            file1 = open(f"{MAIN_DIRECTORY}\\database.txt", 'a+')
            file1.write(f'\nObject identified is {label} at {datetime.now()}')
            file1.close()
            # This file is used by the Flask server to update the web
            # interface
            file2 = open(f"{MAIN_DIRECTORY}\\textfile.txt", 'a+')
            file2.write(f'\nObject is detected')
            file2.close()
            # Creating a bounding box around the classified FOD
            cv2.rectangle(image, (startX, startY), (endX, endY),
                          COLORS[idx], 2)
            y = startY - 15 if startY - 15 > 15 else startY + 15
            # Displaying the label of the classified FOD
            cv2.putText(image, label, (startX, y),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
    return image

# Initializing the camera for streaming
cap = VideoStream(usePiCamera=True).start()

# Allow the camera to warm up
time.sleep(2.0)

# FOD detection and classification from a video file
# cap = cv2.VideoCapture("25_meters_person.mp4")

# Calculating the total frames present in the video file
# length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
# print(length)

# Initializing the FPS rate
fps = FPS().start()

# Creating an attribute for background subtraction using Gaussian
# distribution
sub_image = cv2.createBackgroundSubtractorMOG2(history=100, varThreshold=90,
                                               detectShadows=False)

# Initializing the loop for FOD detection and classification
while True:
    # Capture a frame from the camera; VideoStream.read() returns the
    # frame directly (with cv2.VideoCapture use: _, frame = cap.read())
    frame = cap.read()
    # Reset the caution text each frame so a stale detection does not
    # trigger classification when no contour is found
    caution_text = ""

    # Applying background subtraction to the captured frame
    mask = sub_image.apply(frame)

    # Thresholding the frame with the given constant value
    thresh = cv2.threshold(mask, 25, 255, cv2.THRESH_BINARY)[1]

    # Dilation of the thresholded frame to fill in holes
    thresh = cv2.dilate(thresh, None, iterations=2)

    # Finding the contours in the frame
    cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)
    cnts = imutils.grab_contours(cnts)

    # Loop over the contours
    for c in cnts:
        # If the contour area is smaller than 400 px, ignore it
        if cv2.contourArea(c) < 400:
            continue

        # Computing the bounding box around the contour area
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        caution_text = "FOD is detected"
        print("[INFO] {}".format(caution_text))

        # Writing the caution text and time in the text file so that it
        # may be accessed as a database
        file3 = open(f"{MAIN_DIRECTORY}\\database.txt", 'a+')
        file3.write(f'\n\n{caution_text} at time {datetime.now()}')
        file3.close()
        # This file is used by the Flask server to update the web interface
        file4 = open(f"{MAIN_DIRECTORY}\\textfile.txt", 'a+')
        file4.write(f'\nObject is detected')
        file4.close()

    # Displaying the frame after object detection
    if caution_text:
        # Applying the Caffe model for object classification
        image = object_detection(frame)
        cv2.imshow("image", image)
        # cv2.imshow("mask", mask)
    else:
        cv2.imshow("Frame", frame)
        # cv2.imshow("mask", mask)

    # Calculating the FPS after the whole process and displaying it
    fps.update()
    fps.stop()
    print("approx. FPS: {:.2f}".format(fps.fps()))

    # If the "esc" key is pressed, break from the loop
    key = cv2.waitKey(30)
    if key == 27:
        break

# Cleaning up after the program is stopped
cap.stop()
# cap.release()
cv2.destroyAllWindows()

Appendix B

B.1 Program Listing of Flask based Web Server

Coding part for the Flask based web server, a micro web framework, which works as the backbone to display the different web pages with data related to the FOD detection system.

"""Initializing the server"""
from flask import Flask, render_template, request
import os

current_dir = os.getcwd()
file_path = f"{current_dir}\\textfile.txt"

# Flask constructor
app = Flask(__name__)


# Decorator for the route function with different methods
@app.route('/', methods=['GET', 'POST'])
def content():
    """
    :return: web page after satisfying the condition
    """
    if request.method == "POST":
        # Acknowledging the caution message clears the shared text file
        file = open(file_path, 'w')
        file.close()
        return render_template('index.html')
    if os.path.getsize(file_path) == 0:
        return render_template('index.html')
    else:
        return render_template('content.html')


@app.route('/database', methods=['GET'])
def data():
    with open('database.txt', 'r') as database:
        return render_template('database.html', text=database.read())


# Running the application on the local development server
if __name__ == "__main__":
    app.run()

B.2 Program Listing of Web Pages displayed by Web Server

B.2.1 Program Listing of web page displaying normal message

Coding part for the index.html file; it shows the normal message from the system when there is no occurrence of FOD.

<html>
<head>
<title>FOD System</title>
<meta http-equiv="refresh" content="1">
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="https://mdbootstrap.com/docs/jquery/css/shadows/">
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css">
<link rel="stylesheet" href="{{ url_for('static', filename='css/main.css') }}">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js"></script>
<style>
h1 {
    text-align: center;
    font-size: 35px;
    color: black;
}
footer {
    position: absolute;
    bottom: 0;
    left: 0;
    right: 0;
    background-color: #e74011;
}
.bg {
    background-image: url(../static/img/full_runway.jpg);
    background-position: cover;
    background-size: 100%;
    width: 100%;
    height: 100%;
    background-size: cover;
    background-repeat: no-repeat;
    position: fixed;
    background-attachment: scroll;
    top: 0;
    right: 0;
    bottom: 0;
    left: 0;
    content: "";
    z-index: 0;
}
</style>
</head>
<body>
<div class="bg">
<nav class="navbar navbar-expand-sm p-3 mb-5 rounded" style="box-shadow: 4px 4px 10px; color: #e74011;">
<div class="container-fluid">
<div class="navbar-header">