
Article

Object Tracking with LiDAR: Monitoring Taxiing and Landing Aircraft

Zoltan Koppanyi * and Charles K. Toth

Department of Civil, Environmental and Geodetic Engineering, The Ohio State University, 470 Hitchcock Hall, 2070 Neil Ave, Columbus, OH 43210, USA; [email protected] * Correspondence: [email protected]; Tel.: +1-614-804-9057

Received: 16 January 2018; Accepted: 1 February 2018; Published: 3 February 2018

Featured Application: Tracking aircraft during taxiing, landings and takeoffs at airports; acquiring statistical data for centerline deviation analysis and veer-off modeling, as well as for design in general.

Abstract: Mobile light detection and ranging (LiDAR) sensors used in car navigation and robotics, such as Velodyne’s VLP-16 and HDL-32E, allow for sensing the surroundings of the platform with high temporal resolution to detect obstacles, track objects and support path planning. This study investigates the feasibility of using LiDAR sensors for tracking taxiing or landing aircraft close to the ground to improve airport safety. A prototype system was developed and installed at an airfield to capture point clouds to monitor aircraft operations. One of the challenges of accurate object tracking using the Velodyne sensors is the relatively small vertical field of view (30°, 41.3°) and angular resolution (1.33°, 2°), resulting in a small number of points of the tracked object. The point density decreases with the object–sensor distance, and is already sparse at a moderate range of 30–40 m. The paper introduces our model-based tracking algorithms, including volume minimization and cube trajectories, to address the optimal estimation of object motion and tracking based on sparse point clouds. Using a network of sensors, multiple tests were conducted at an airport to assess the performance of the demonstration system and the algorithms developed. The investigation was focused on monitoring small aircraft moving on runways and taxiways, and the results indicate that velocity and positioning accuracies of better than 0.7 m/s and 17 cm, respectively, were achieved. Overall, based on our findings, this technology is promising not only for aircraft monitoring but also for other airport applications.

Keywords: mobile LiDAR; object tracking; entropy minimization; object recognition; airport safety

1. Introduction

Remote sensing technologies have long been used for mapping the static part of natural and man-made environments, including the shape of the Earth, transportation infrastructure, and buildings. As sensing technologies continue to advance, more powerful remote sensing systems have been developed [1], allowing us to observe and monitor the dynamic components of the environment. Light detection and ranging (LiDAR), as a remote sensing technology, is already successfully applied to track pedestrian movements [2–4] and ground vehicle traffic from stationary [5] or on-board sensors [6,7]. Similar to ground vehicle or pedestrian traffic monitoring, airfield operations, such as taxiing, takeoffs and landings, can be observed and tracked using LiDAR sensors installed on the airfield. Safety at airports is a serious concern worldwide, since 30% of fatal accidents occurred during landing and takeoff and 10% during taxiing or other ground operations between 2007 and 2016, based on the Boeing Company’s annual report [8]. Aircraft motion around airports is tracked by radar,

but airfield surveillance primarily relies on human observations from control towers. Clearly, active remote sensing in general is able to reduce the human error factor and operates 24/7, even in bad visibility conditions [9].

This feasibility study examines the achievable accuracy of a LiDAR-based monitoring system installed at an airfield to track and derive the trajectory of taxiing aircraft. At close range, LiDAR allows for directly capturing a more detailed and accurate 3D representation of taxiing or landing aircraft compared to existing radar or camera systems due to the higher spatial resolution. The 3D data enables accurately estimating driving patterns or trajectories, detecting aircraft types, monitoring aircraft operations in general, and can be applied to pilot education and training. Analysis of a large number of trajectories can be used for creating stochastic models of airplane behavior during various operations, and this information can then be used for airport infrastructure and operation planning. Various researchers have already investigated LiDAR technology for improving airport safety [9,10], and our earlier studies [11,12] were the first attempt to use LiDAR sensors to track aircraft ground movements. As a continuation of our previous work, in this study, we apply a more complex sensor network, rigorously address the sensor georeferencing, and, for the first time, track airplanes during landings.

The main contribution of the paper is the introduction of a novel algorithm to derive motion parameters and extract the trajectory of objects from sparse point clouds. Unlike terrestrial laser scanners that are able to produce large and detailed point clouds over a long scan time, mobile LiDAR sensors applied in autonomous vehicle or robotic applications capture sparse point clouds, but with high temporal resolution, typically at 10–30 Hz. These sensors make complete 360° scans around the rotation axis and capture data along multiple planes, producing high resolution in these planes. For instance, the Velodyne VLP-16 [13] and HDL-32E [14] LiDAR sensors, used in this study, have a 30° and 41.3° field of view, respectively, and the 16 and 32 planes represent an angular resolution of 2° and 1.33°, which is much coarser than the 0.1°–0.3° resolution around the rotation axis. These sensor characteristics translate to only about 10–20 points per scan captured from a Cessna-sized airplane at a 40-m object–sensor distance. Note that the minimum distance to deploy sensors around runways and taxiways is regulated by the airport authorities for safety reasons. Clearly, the challenge is to track aircraft from these low-resolution LiDAR scans. The algorithm addressing this issue and presented in this paper could be of interest to autonomous vehicle applications or robotics to track objects that are farther away from the LiDAR sensor.

The structure of the paper is as follows. In the next “Background” subsection, the classification and brief description of the existing tracking algorithms, such as center of gravity (COG) and iterative closest point (ICP) approaches, are presented. Then, Section 2 presents the hardware components of the demonstration system and the procedure for sensor georeferencing.
This section also introduces the proposed algorithms to estimate the parameters of the underlying motion model from point clouds of low point density, including the details of the algorithms and the implementation challenges. In Section 3, two tests are presented using the demonstration system installed at an airfield. During the two tests, data from taxiing and landing aircraft were captured. Section 4 presents the results of the two tests, followed by Section 5, which provides the performance analysis, the comparison of the algorithms, as well as the qualitative and quantitative error assessments. The paper ends with a conclusion and appendices. The terminology and most common definitions used throughout the paper can be found in Appendix A.

Background

LiDAR, as well as other sensors used for tracking objects, can be deployed on mobile or static platforms. The first case is the typical setup for mobile mapping, autonomous vehicles, and robotics. This is a challenging case, because the platform is moving, as is the tracked object, and consequently, the sensor has to be accurately georeferenced at all times. In contrast, the static setup is a simpler case, since the sensor georeferencing has to be done only once. This arrangement is commonly used in surveillance or transportation applications for traffic monitoring. In this study, the static sensor configuration is applied.

Several algorithms exist for tracking objects, such as pedestrians or vehicles, in 3D LiDAR data, and they can be divided into model-free and model-based approaches according to Wang et al. [15]. The model-free approaches do not assume any properties, such as the shape of the object or the motion model that the tracked object is following. In contrast, model-based approaches use prior information about the object to achieve more reliable tracking. Obviously, model-free algorithms are more general, but model-based methods perform better. The methods introduced in this study are model-based and consider the type of the object, assuming certain types of motion models. The general workflow of object tracking in point clouds includes object segmentation, recognition and tracking. The dynamic elements of each scan are separated from the static background during the segmentation step, then the object is recognized in the model-based approach [2,3]. Afterwards, the trajectory or motion parameters are extracted from consecutive scans, and thus, the captured points from the moving objects are known epoch-by-epoch. One can easily estimate the trajectory by calculating, for example, the center of gravity (COG) of the segmented points from each scan. Other solutions are based on point cloud registration algorithms. The iterative closest point (ICP) method [16] can be used to estimate, for instance, the six degrees of freedom (rigid body) transformation matrix between consecutive segmented scans (see for instance [17]). The transformation matrices contain the rotation and translation parameters that can be used for computing the trajectory. In another approach, instead of using the whole segmented scan, common feature points or point sets can be extracted from the scans, and the transformation matrix is estimated based on these correspondences [18]. The tracking can be improved by considering the previous scans using Bayesian or Markov approaches [18,19], or the final trajectory can be refined with Kalman filtering or other smoothing techniques.
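As an illustration of the registration-based approach mentioned above, the following minimal sketch estimates the scan-to-scan rigid-body transformation with point-to-point ICP. The use of the Open3D library, the correspondence distance, and the array layout are assumptions made for this example; the paper does not prescribe a particular ICP implementation.

```python
import numpy as np
import open3d as o3d  # assumed third-party library, not used in the original study

def scan_to_scan_motion(scan_a, scan_b, dt, max_corr_dist=1.0):
    """Estimate the rigid-body transformation between two consecutive segmented scans.

    scan_a, scan_b -- (N, 3) arrays with the object points at epochs t and t + dt
    dt             -- time difference between the two scans [s]
    """
    src, dst = o3d.geometry.PointCloud(), o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(scan_a)
    dst.points = o3d.utility.Vector3dVector(scan_b)
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    T = result.transformation                      # 4x4 homogeneous transformation
    velocity = T[:3, 3] / dt                       # translation per scan interval
    return T, velocity
```

The translation component of consecutive transformations, divided by the scan interval, gives a simple velocity estimate that could feed the smoothing or filtering steps mentioned above.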

2. Prototype System and Methods

2.1. System Overview

The prototype sensor network consists of sensor stations deployed along the runway or taxiway to be monitored. Each sensor station has Velodyne LiDAR sensors and a GPS antenna, rigidly attached to the column of a standard airport light fixture. Besides georeferencing, GPS provides accurate timing for the LiDAR sensors using the 1PPS (pulse per second) and NMEA (National Marine Electronics Association) GPS signals. In addition, each sensor station has a data logging component, deployed farther away from the runway or taxiway for safety reasons.

The adjustable sensor configuration allows for various sensor alignments. For example, Figure 1a,b show horizontally and vertically aligned VLP-16 sensors, respectively, where Z is the rotation axis and κ is the horizontal rotation angle or azimuth, ranging from 0° to 360° with 0.1–0.3° angular resolution (Δκ). Vertical and horizontal alignment mean that the sensor’s rotation plane is either perpendicular or parallel to the horizontal plane, respectively. The narrower field of view (δ) is 30° and 41.3°, with 2° and 1.33° angular resolution (Δδ), for the VLP-16 and HDL-32E systems, respectively. In the installation shown in Figure 1c, the left side shows two vertically aligned VLP-16s and one horizontally aligned HDL-32E scanner, while the right side shows two vertically aligned VLP-16 scanners. The alignment of the sensors was optimized to cover the largest field of view and to capture the maximum number of points from the tracked object for a given number of scanners; a simulation study on this subject was conducted using Blender (see [12,20]). Since the Velodyne VLP-16 sensors have a smaller vertical field of view with coarser angular resolution, these sensors are aligned vertically. While this arrangement shortens the horizontal coverage of the runway, more points can be captured from the aircraft body. The HDL-32E LiDAR is installed horizontally, due to its better vertical angular resolution and larger field of view. The minimum distance at the airport where our tests were conducted was specified as 20 m and 30 m next to taxiways and runways, respectively.

The two sensor stations, 40 m apart from each other, provide an effective coverage of the runway of about 70 m (see Figure 1d).

Figure 1. Sensor system overview: (a) horizontally and (b) vertically aligned VLP-16 sensor, (c) components of the demonstration sensor network in an installation at the airfield utilizing three VLP-16 scanners and one HDL-32E, including the sensors’ fields of view, and (d) sensor coverage.


2.2. Sensor Georeferencing

In a sensor network, point clouds acquired by the individual sensors have to be transformed into a common reference frame that could be local or global. When rigidly connected sensors are used together, the coordinate system of one of the sensors is considered as a local frame, and the relative translation and rotation of the other sensors are estimated. This procedure is called point cloud co-registration. There are many co-registration approaches, and most of these methods require a measurable overlap between point clouds, which may not be feasible in the proposed system configuration. Therefore, in contrast to co-registration, the coordinates and rotations of the sensors are independently estimated in a global system. This procedure is generally referred to as sensor georeferencing.

Sensor georeferencing includes the determination of the position and orientation in a reference frame. The coordinates of the LiDAR sensor location are measured with a conventional land surveying technique using a total station. The sensor orientation is usually indirectly estimated by using object space targets. In our case, for estimating the three attitude angles, a 1.8 m × 1.2 m calibration board is used as a target (see Figure 2). This smooth and flat calibration board is large enough to provide a sufficient number of points obtained from it. Prisms attached to the four corners of the board were used to measure their global coordinates with a total station. The board was observed at various locations and orientations in the field of view of the sensors (see Figure 2).

Figure 2. Sensor georeferencing: calibration board with surveying prism (upper left corner) measured with total station and sensor station.

The boresight estimation is based on comparing the total station measured target locations and orientations to the same parameters estimated from the point clouds of the targets. Obviously, there is no chance that the prism locations are measured by the point cloud, but by fitting a plane, the orientation of the board surface can be estimated. Consequently, the norm vectors of the board in the sensor and global systems can be calculated from the LiDAR points and the corner measurements, respectively. Using the two results, finding the unknown orientation parameters can be formulated as a least squares estimation problem:

$$\min_{\alpha,\beta,\gamma} \; \sum_{i=1}^{n} \left\| R(\alpha,\beta,\gamma)\,\mathbf{n}_i^{S} - \mathbf{n}_i^{G} \right\|_2^{2}, \qquad (1)$$

where $\mathbf{n}_i^{G} \in \mathbb{R}^3$ is the norm vector of the board in the global system at the $i$th position, $\mathbf{n}_i^{S} \in \mathbb{R}^3$ is the norm vector of the board in the sensor coordinate system at the $i$th position, $n$ is the number of board positions, and $R(\alpha,\beta,\gamma) \in SO(3)$ is the rotation matrix as the function of the $\alpha, \beta, \gamma \in [0, 2\pi]$ orientation parameters; for detailed notations and definitions, see Appendix A. The Levenberg–Marquardt method is used to solve Equation (1). The positions and orientations of the sensors, along with the time synchronization, allow for transforming all points captured at different epochs to the same global coordinate system.
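A minimal sketch of solving Equation (1), assuming an x-y-z Euler-angle parameterization of R and SciPy’s Levenberg–Marquardt solver; the function names and the angle convention are illustrative and not the authors’ implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def estimate_orientation(n_sensor, n_global):
    """Find (alpha, beta, gamma) such that R(angles) @ n_i^S best matches n_i^G (Eq. (1)).

    n_sensor -- (n, 3) board normals fitted to the LiDAR points (sensor frame)
    n_global -- (n, 3) board normals derived from the surveyed corners (global frame)
    """
    def residuals(angles):
        R = Rotation.from_euler('xyz', angles).as_matrix()
        return (n_sensor @ R.T - n_global).ravel()   # 3n residual components

    sol = least_squares(residuals, x0=np.zeros(3), method='lm')  # Levenberg-Marquardt
    return sol.x                                     # alpha, beta, gamma in radians
```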
2.3. Aircraft Segmentation from LiDAR Scans

The first step of object tracking is the separation of the dynamic and static content of the scene, and then the identification of the dynamic objects. A reference or baseline scan of the background can be obtained from scans that include no aircraft, and then an occupancy grid is created from the average background scan. During tracking, this grid is compared to an incoming scan, and thus, the dynamic content can be easily filtered. The performance is further improved by defining a region of interest in the scans, such as the runway or taxiway in view.

In our tests, only one airplane is present at any epoch, and therefore, this segmentation is a relatively easy task. It is noteworthy that almost always just one plane is in the network’s field of view for landing scenarios due to mandatory aircraft separation. Obviously, there can be more than one plane in the case of taxiing. In the latter case, the airplanes can be segmented from each other by simply tracking their motion in the region of interest. At the end of the segmentation process, 3D coordinates of the aircraft body points with their timestamps are available for each scan.

The density of the captured scans depends on the object–sensor distance. Figure 3 shows segmented data captured by five LiDAR sensors at 23 m and 33 m object–sensor distance. Various colors indicate the times, and thus, the point cloud in the figures shows all scans of the plane while crossing the sensor network’s field of view. Clearly, the shape of the airplane is barely recognizable in Figure 3a, and not recognizable in Figure 3b. These scans contain a very small number of points of the aircraft body due to the large object–sensor distance, and thus, dedicated algorithms are needed to extract the motion information or trajectories.
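The following sketch illustrates the background/occupancy-grid filtering described at the beginning of this subsection; the 2D grid, the 0.5 m cell size, and the array layout are assumptions made for the example.

```python
import numpy as np

def background_cells(background_scans, cell=0.5):
    """Collect every grid cell occupied by the static background (scans with no aircraft)."""
    occupied = set()
    for scan in background_scans:                          # each scan: (N, 3) array
        for ij in np.floor(scan[:, :2] / cell).astype(int):
            occupied.add(tuple(ij))
    return occupied

def segment_dynamic(scan, occupied, cell=0.5):
    """Keep only the points of an incoming scan that fall outside the background grid."""
    ij = np.floor(scan[:, :2] / cell).astype(int)
    keep = np.array([tuple(i) not in occupied for i in ij])
    return scan[keep]
```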

Figure 3. Top-down views from an airplane captured at different times by the sensor network. The two subfigures illustrate the point densities at different object–sensor distances; color differentiates scans: (a) at 23 m object–sensor distance, the average number of points per scan is 130 (26 per sensor), and (b) at 33 m object–sensor distance, the average number of points per scan is 20 (4 points from each sensor).


2.4. Motion Models

This study uses a model-based object tracking approach to address the issue of the small number of captured points. Three types of motion models, namely, constant velocity (CV), constant acceleration (CA), and polynomial motion models, are used to describe the dynamics of taxiing or landing airplanes. The models for one coordinate axis are shown in Table 1; details of these models can be found in Appendix B. For all three models, the following two assumptions are made. First, the tracked plane follows a rigid body motion, and second, the heading of the aircraft body points towards the motion direction. The second condition is approximate for landing aircraft, as side wind may introduce an error in the heading and aircraft attitude. These assumptions mean that all points from the aircraft have the same motion parameters. The faster the aircraft moves, the better the assumptions hold. We also assume 2D motion for taxiing and 3D motion for landing. Note that, in the simplest case, the trajectory of the CV model is a straight line, and thus, the limitation of this model is that it cannot be used for tracking turns, accelerations or decelerations. However, this motion is still valid within a short time window. The CA model is able to describe more complex, namely curvilinear, motion, but it is still inadequate to model sharp turns. The general or polynomial model allows for describing arbitrary trajectories within the assumptions presented above.

Table 1. Motion models used in this study; more details can be found in Appendix B.

Constant Velocity | Constant Acceleration | General or Polynomial
$s = s_0 + v_s t$ | $s = s_0 + v_s t + a_s \frac{t^2}{2}$ | $s = s_0 + \sum_{i=1}^{n} a_i t^i$

2.5. Object Space Reconstruction with Various Motion Models

If the motion model is chosen and the parameters of the model, such as velocity, acceleration or the coefficients of the polynomial model, are known, then all points of the aircraft body taken at different epochs can be transformed back to a common frame at zero time, resulting in a reconstructed point cloud, in other words, in the reconstruction of the aircraft body (see the end of Appendix B). Thus, the motion model with known parameters can be interpreted as a time-varying transformation. Figure 4 shows an example of the reconstruction assuming CV motion at 4.5, 6.5, 8.5 and 10.5 m/s velocities. Note that as the velocity parameter in the X direction approaches the real value, the reconstructed point cloud becomes sharper and more consistent, and finally, the aircraft body is clearly recognizable at 10.5 m/s. The idea behind the proposed approach of finding the motion parameters is to measure the consistency, or ‘order’, of the reconstruction. Obviously, if the motion parameters are known, the trajectory can be simply calculated using the model.
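As a minimal sketch of this time-varying transformation, the snippet below moves every time-stamped point back to the zero epoch under a CV model; for a CA model the term v t would simply be replaced by v t + a t²/2. The array layout is an assumption made for illustration.

```python
import numpy as np

def reconstruct_cv(points_xyzt, velocity):
    """Transform time-stamped points back to a common frame at t = 0 (CV motion).

    points_xyzt -- (N, 4) array of [x, y, z, t]
    velocity    -- (3,) array [vx, vy, vz]; vz = 0 for the 2D taxiing case
    """
    xyz, t = points_xyzt[:, :3], points_xyzt[:, 3:4]
    return xyz - t * np.asarray(velocity)   # each point moved back along the assumed motion
```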

2.6. Estimating the Motion Model with Volume Minimization

The proposed algorithm, called volume minimization (VM), adjusts the motion parameters to find the reconstruction of the aircraft point cloud that occupies the minimum volume. In this way, the volume of the reconstruction is used as a metric for measuring the point cloud consistency:

$$\min_{\mathbf{x}} \; \mathrm{Vol}\Big( \bigcup_{pt \in P} T(pt, t, \mathbf{x}) \Big), \qquad (2)$$

where $pt = [x, y, z, t] \in \mathcal{P}$ is a point of the point cloud $P \in \mathcal{C} = 2^{\mathcal{P}}$ at time $t \in \mathbb{R}$, $T: \mathcal{P} \times \mathbb{R} \times \mathbb{R}^n \to \mathcal{P}$ depicts the motion model that transforms a point to epoch $t$ based on the underlying motion model with the given motion parameters $\mathbf{x} \in \mathbb{R}^n$, and finally, $\mathrm{Vol}: \mathcal{C} \to \mathbb{N}$ is a function that measures the volume of a point cloud.

During volume minimization, the space is decimated into cubes, and the volume is measured by the number of cubes occupied by the reconstructed point cloud, as shown in Figure 5. Figure 4 shows the cubes from top view, and clearly illustrates that as the velocity approaches the real value and the reconstruction becomes more realistic, the cube numbers also decrease, demonstrating that the proposed metric is consistent with the reconstruction quality. It is noteworthy that instead of volume, one can use entropy similar to [21,22]. For our dataset, we found that the volume metric requires less computation and has better convergence properties as opposed to entropy.
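A minimal sketch of the Vol(·) function of Equation (2) and of the resulting cost for a CV parameter vector; the cube size and array layout are illustrative assumptions.

```python
import numpy as np

def occupied_cube_count(points_xyz, cube=0.5):
    """Vol(.) in Eq. (2): the number of distinct cubes touched by a point set."""
    idx = np.floor(points_xyz / cube).astype(int)          # cube index of every point
    return len({tuple(i) for i in idx})

def cv_objective(v, points_xyzt, cube=0.5):
    """Cost of a candidate CV velocity v = [vx, vy, vz] for time-stamped points."""
    xyz, t = points_xyzt[:, :3], points_xyzt[:, 3:4]
    reconstructed = xyz - t * np.asarray(v)                # move every point back to t = 0
    return occupied_cube_count(reconstructed, cube)
```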

Figure 4. The idea of volume minimization: figures show reconstructions (with red dots) assuming a constant velocity (CV) model with different velocities; green points are the raw data, black rectangles show the occupied cubes, and titles show velocities and number of cubes obtained.

The objective function of Equation (2) for the CV model is presented in Figure 6 in order to analyze the problem space. Note that the figure shows the function at different scales. Although the objective function at large scale, shown in Figure 6a, seems quasi-convex, the medium scale in Figure 6b reveals that this function is highly non-convex. The complexity of the objective function depends on the cube size, the number of points captured from the aircraft, and the applied motion model. Obviously, the objective function is non-smooth due to the volume calculation presented above. This can be clearly seen at small scale in Figure 6c, which shows that the objective function is a multivariate step function. Consequently, one possible solution is to approximate the objective function with a smooth function and solve the problem with one of the smooth optimization techniques; such a solution is very likely to be sub-optimal. Alternatively, non-smooth optimization techniques might be applicable.

In this study, we implemented simulated annealing (SA) for solving Equation (2) [23]. The inputs of the algorithm are the initial search boundaries for all motion parameters and the cube size. The algorithm is relatively robust against the choice of the search space due to the favorable characteristics of the objective function at large scale. This allows for finding a value close to the optimum in a couple of iterations, even if the search space is large. The cube size, however, has to be chosen carefully. If the size is too large, then it is not adequate to describe the volume of the aircraft, and the reconstruction will be “blurry”. If it is too small, then not enough points will be located in the cubes. In other words, one can easily imagine a situation when the cube size is so small that each point only occupies one cube at almost all chosen motion parameter values, which clearly results in the same objective (cost) values. In addition, the large-scale characteristics of the objective function highly depend on the cube size, and this property can be lost if the cube size is too small. For a Cessna-size airplane, cube sizes of 0.2–1 m performed well in our dataset.

Figure 5. Measuring the volume of the reconstruction: the space is decimated with cubes, and the volume is represented by the number of cubes occupied by the reconstruction.

Figure 6. Illustration of the problem space for the CV motion model at various scales: (a) large, (b) medium, and (c) small scale.

In the following, the details of the SA parameterization are presented; the pseudocode of the algorithm can be found in Appendix C. A geometric cooling function is applied with a base of c = 0.92. The initial search boundaries (r₀) are 100 m/s and 100 m/s² for velocities and accelerations, respectively. The search boundaries (r) decrease with the cooling function, and 50–500 randomly picked neighboring values within the bounds are evaluated at each iteration. The acceptance probability has an exponential distribution over the absolute difference between the candidate and current best solutions, i.e., |e′ − e|. The stopping criterion is satisfied when the search bounds are smaller than 0.01 for all parameters. The overall running time depends on the cube size, the applied motion model, and the point cloud size. The algorithm typically terminates in less than 100 iterations; note that SA does not guarantee optimality. This means a roughly 1–3 s runtime on an Intel Core i7 4.4 GHz CPU. Since the current implementation is not optimized, a less than 1 s execution time is very likely achievable with code optimization and a faster CPU, allowing for real-time applications. For the source code, see Supplementary Materials.

The results provided by the SA algorithm can be assessed visually based on the consistency of the reconstruction. For instance, Figure 5 shows a reconstruction where the edges of the reconstructed point cloud are sharp and the major parts of the airplane are clearly recognizable, indicating that the parameter estimation performed well.
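The sketch below mirrors the SA parameterization described above (geometric cooling, shrinking search bounds, |e′ − e| acceptance); it is a simplified reading of Appendix C rather than the authors’ code, and the neighbor count and random seed are arbitrary choices.

```python
import numpy as np

def anneal(cost, x0, r0, base=0.92, n_neighbors=100, stop=0.01, seed=0):
    """Simulated annealing over the motion parameters (cf. Section 2.6 and Appendix C).

    cost -- objective, e.g. the occupied-cube count for a candidate parameter vector
    x0   -- initial motion parameters; r0 -- initial search half-widths per parameter
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    e = cost(x)
    r = np.asarray(r0, dtype=float)
    temperature = 1.0
    while np.any(r > stop):                                # stop when all bounds < 0.01
        for _ in range(n_neighbors):                       # 50-500 candidates per iteration
            candidate = x + rng.uniform(-r, r)
            e_new = cost(candidate)
            # accept improvements, and worse moves with probability exp(-|e'-e|/T)
            if e_new < e or rng.random() < np.exp(-abs(e_new - e) / temperature):
                x, e = candidate, e_new
        r = r * base                                       # geometric cooling of the bounds
        temperature = temperature * base
    return x, e
```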
cube the space decimation presented for the volume calculation in the previous section (see Figure 7). trajectoryThose estimation points that (CT) are located that is in used the same for refining cube can the be trajectory assumed to provided be captured by thefrom volume the same minimization spot of Those points that are located in the same cube can be assumed to be captured from the same spot of algorithmthe aircraft to solve body for but higher-order at different times. general Thus, motion these models.points represent correspondences. The X, Y, and the aircraft body but at different times. Thus, these points represent correspondences. The X, Y, and AfterZ coordinate obtaining components the initial of the reconstruction cube trajectories with can be assuming calculated CV for oreach CA cube motion based modelson these points; and using Z coordinate components of the cube trajectories can be calculated for each cube based on these points; these trajectories are shown in black lines in Figure 7. Next, the velocity of each cube is derived based volumethese minimization, trajectories are theshown proposed in black CTlines algorithm in Figure 7. decimates Next, the velocity this reconstruction of each cube is intoderived cubes based similar on the time differences of their points. This allows the visualization of these velocities on a time– to theon space the time decimation differences presented of their points. for the This volume allows calculation the visualization in the of previous these velocities section on (see a time– Figure7). velocity diagram (see Figure 8). Assuming that all cubes, and thus all points of the aircraft, have the Thosevelocity points diagram that are (see located Figure in 8). the Assuming same cube that can all cubes, be assumed and thus to all be points captured of the from aircraft, the have same the spot of same motion, a chosen general motion model can be fitted to the velocity instances with regression the aircraftsame motion, body buta chosen at different general times. motion Thus, model these can be points fitted represent to the velocity correspondences. instances with Theregression X, Y, and Z (see Figure 8). The residuals from the regression provide a feedback and used later to assess the errors. (see Figure 8). The residuals from the regression provide a feedback and used later to assess the errors. coordinateNote components that the correspondences of the cube trajectories rely on the can initial be calculatedreconstruction. for eachIf the cubeinitial based point cloud on these is not points; Note that the correspondences rely on the initial reconstruction. If the initial point cloud is not theseclose trajectories to the real are solution, shown inthen black the correspondences lines in Figure7 .mightNext, not the be velocity correct. ofThus, each executing cube is derived the steps based close to the real solution, then the correspondences might not be correct. Thus, executing the steps on thepresented time differences above results of their in a points. reconstruction This allows where the the visualization correspondences of these are velocitiesdifferent than on a they time–velocity were presented above results in a reconstruction where the correspondences are different than they were diagramin the (see initial Figure reconstruction,8). 
Assuming requiring that all cubes, the repetition and thus of all the points entire of thetrajectory aircraft, estimation have the sameuntil the motion, in the initial reconstruction, requiring the repetition of the entire trajectory estimation until the a chosencorrespondences general motion do not model change. can Since be the fitted refinement to the velocity requires instancesonly a simple with regression regression computation, (see Figure 8). correspondences do not change. Since the refinement requires only a simple regression computation, the runtime with an optimized code is one or two orders of magnitude less than it was for the volume The residualsthe runtime from with the an regression optimized providecode is one a feedback or two orders and of used magnitude later to less assess than the it was errors. for the volume minimization algorithm. minimization algorithm.

Figure 7. Illustration of the cube trajectory (CT) algorithm: the initial reconstruction (red dots) is calculated from the data (blue dots), then this reconstruction is decimated into cubes, and then the trajectories (black lines) of all cubes are determined based on the points in the cubes.

Figure 8. Time–velocity graph of cubes from curvilinear motion: the cube velocities along the X and Y directions are depicted with blue and red dots, respectively; in this example, a constant acceleration (CA) motion model is used, thus the parameters of a line are estimated with regression.
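A minimal sketch of the CT refinement described in this section: points are grouped by the cube they occupy in the initial reconstruction, a velocity sample is estimated per cube, and a polynomial velocity model is fitted by regression. The cube size, the per-axis fits, and the array layout are assumptions made for illustration.

```python
import numpy as np

def cube_velocity_samples(points_xyzt, recon_xyz, cube=0.5):
    """One velocity sample per occupied cube, estimated from its time-stamped points."""
    keys = [tuple(i) for i in np.floor(recon_xyz / cube).astype(int)]
    groups = {}
    for key, pt in zip(keys, points_xyzt):                 # group raw points by cube
        groups.setdefault(key, []).append(pt)
    t_mid, velocities = [], []
    for pts in groups.values():
        pts = np.asarray(pts)
        if len(pts) < 2:
            continue                                       # need at least two epochs per cube
        t = pts[:, 3]
        v = [np.polyfit(t, pts[:, axis], 1)[0] for axis in range(3)]  # slope = velocity
        t_mid.append(t.mean())
        velocities.append(v)
    return np.asarray(t_mid), np.asarray(velocities)

def fit_velocity_model(t_mid, velocities, degree=1):
    """Fit a polynomial velocity model per axis; degree 1 corresponds to CA motion."""
    return [np.polyfit(t_mid, velocities[:, axis], degree) for axis in range(3)]
```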

3. Field Tests

The tests were conducted at the Ohio State University (OSU) Don Scott Airport (IATA: OSU, ICAO: KOSU) in Columbus, OH, USA. The airport is used for government and private purposes, and also provides education services. The airport is capable of accepting small-size single/twin engine airplanes, helicopters and corporate jets. Prior to the data acquisition, a local surveying network with three control points was established to support the surveying of the sensor stations, the pavement markings on the runway, and the georeferencing (see Section 2.2). The local network is tied to the UTM (Universal Transverse Mercator) projection using the positions determined at the control points from GPS observations. All positions and orientations are derived with respect to this global coordinate system.

Two tests were conducted to demonstrate and evaluate the performance of the proposed system. These tests are summarized in Table 2. The goal of the first test was to assess the performance of the system for tracking taxiing aircraft, i.e., modeling 2D motion. The prototype system was deployed next to one of the taxiways at the OSU airport. The sensor network consisted of two sensor stations. The first station had two vertically aligned VLP-16 sensors, and the other station was equipped with a vertically aligned VLP-16 and a horizontally aligned HDL-32E scanner. Figure 9a shows the sensor configuration and the fields of view. The distance between the two sensor stations was about 40 m, and the stations were deployed 20 m from the centerline. This sensor configuration allows for 70 m effective runway coverage, measured on the centerline.
During the three-hour data acquisition, 18 taxiing Cessna-type and corporate jet airplanes were tracked; the velocity of the taxiing airplanes varied from 8 to 12 m/s.

The goal of the second test was to capture landing aircraft, i.e., to track in 3D. The system was deployed next to the 9L/27R runway; Figure 9b shows the locations of the sensor stations and the fields of view. The sensor configuration is slightly different than in Test #1, as one of the stations was equipped with an extra VLP-16. The sensor stations were located at a 40 m distance from each other, the stations were deployed 33 m from the centerline, and the effective coverage measured along the centerline was 100 m. During the six-hour data acquisition, 34 landing Cessna-type airplanes were tracked; the velocity of the landing airplanes was around 20 m/s.


Figure 9. Test sensor configurations: (a) Test #1 sensor network (taxiing), and (b) Test #2 sensor network configuration (landing).

Field of Object–Sensor Test Case Goal Sensors Table 2. Overview of the field tests (* projected to the runwayView centerline). * Distance 3 vertical VLP‐16s, #1 Taxiing 2D performance at taxiing 70 m 20 m 1 horizontal HDL‐32E Field of Object–Sensor Test Case Goal Sensors 3D performance at 4 vertical VLP‐16s, View * Distance #2 Landing 100 m 33 m landing 1 horizontal3 vertical HDL VLP-16s,‐32E #1 Taxiing 2D performance at taxiing 70 m 20 m 1 horizontal HDL-32E 4. Results 4 vertical VLP-16s, #2 Landing 3D performance at landing 100 m 33 m The first step of the data processing is the sensor1 horizontal georeferencing, HDL-32E described in Section 2.3. Figure 10 shows the arrangement for one sensor using the calibration board at five different locations; the points captured by the sensor in the sensor coordinate system are shown in red, and the red lines are 4. Resultsthe surface norms of the boards calculated from these points. The blue lines are the norms of the Theboards first step at the of five the locations data processing in the global is the coordinate sensor georeferencing,system; these norms described are derived in Sectionfrom the 2.3 board. Figure 10 corners (blue dots) measured with a total station. Note that the global coordinate system is connected shows the arrangement for one sensor using the calibration board at five different locations; the points to the local coordinate system of which origin is the sensor position calculated with GPS. The capturedtransformation by the sensor parameters in the sensor between coordinate the two systems system are are estimated shown based in red, on andEquation the red(1). In lines the are the surfaceFigure norms 10, of the the green boards color calculated depicts the boards from theseafter transforming points. The the blue sensor lines points are into the the norms global of frame. the boards at the fiveThe locations standard deviation in the global of the coordinateresiduals of the system; parameter these estimation norms areis 0.3°. derived from the board corners (blue dots) measuredOnce all individual with a sensor total station.data were Note transformed that the into global the global coordinate coordinate system system, is the connected dynamic to the local coordinatecontent, i.e., system the aircraft of which body, origin is separated is the from sensor the static position background calculated according with to GPS. Section The 2.3. transformation Three algorithms listed in Table 3 are compared for Test #1. The COG algorithm is a straight‐forward model‐ parametersfree object between tracking the method. two systems In our areimplementation, estimated based the COG on points Equation for each (1). scan In theis calculated Figure 10as ,the the green color depictsaverage the (center boards of gravity) after of transforming the object points. the Then, sensor the pointstrajectory into is derived the global with, frame.first, smoothing The standard ◦ deviationthe of COG the points residuals with of a Gaussian the parameter filter, and estimation then, resampled is 0.3 .with spline interpolation in order for Oncedensifying all individual the trajectory sensor points. data were The transformedsecond investigated into the method global is coordinate the model‐ system,based volume the dynamic content,minimization i.e., the aircraft introduced body, in Section is separated 2.6; a 2D from CA motion the static is assumed background due to the according taxiing scenario to Section in 2.3. Test #1. 
The third method, the CT algorithm, uses the output of the VM algorithm assuming CV motion as the initial reconstruction. Then, the CT algorithm refines this solution by estimating a 2D fourth-degree polynomial motion model. Figure 11 illustrates the points of the airplane body at different times and then the reconstruction by the CT algorithm. Figure 12 presents the reconstructions for the three methods. The quality of the reconstructions can be easily compared by visual assessment. For trajectory solutions, a reference point of the airplane body has to be chosen, and the lowermost point of the front wheel was selected here. Figure 13 presents the trajectories calculated by the three methods.
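A minimal sketch of the COG post-processing described above (Gaussian smoothing followed by spline resampling); the kernel width and output sampling interval are illustrative choices, not the values used in the experiments.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.interpolate import CubicSpline

def densify_cog_trajectory(t, cogs, sigma=2.0, step=0.01):
    """Smooth per-scan COG points and resample them on a dense time grid.

    t     -- (M,) scan epochs (strictly increasing); cogs -- (M, 3) raw COG coordinates
    sigma -- Gaussian kernel width in samples; step -- output sampling interval [s]
    """
    smoothed = gaussian_filter1d(cogs, sigma=sigma, axis=0)   # suppress scan-to-scan noise
    spline = CubicSpline(t, smoothed, axis=0)                 # spline interpolation over time
    t_dense = np.arange(t[0], t[-1], step)
    return t_dense, spline(t_dense)
```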

Figure 10. Sensor georeferencing: points from the calibration board in the sensor coordinate system are depicted with red, the norms of the board surfaces are shown with red lines, blue dots are the board corners measured with a total station, the blue lines are the norms of the board surfaces in the global coordinate system, green points depict the points after transforming the board points from the sensor to the global coordinate system, and the red circle marks the sensor position.

Table 3. Investigated algorithms for Test #1.

Method                                    | Parameters
Center of Gravity (COG)                   | The COG points are filtered with a Gaussian kernel and resampled with spline interpolation
Volume Minimization with CA motion model  | Cube size is 1 m; other parameters are presented in Section 2.6
Refinement with Cube Trajectories         | Initial reconstruction using a CV model and SA; motion model is a fourth-degree polynomial

Figure 11. Green points mark the raw dataset captured from a taxiing aircraft during Test #1, and red points show the reconstructed airplane body using the CT method.

Landing airplanes were captured from a 33-m object–sensor distance during Test #2. The point density is about 20 points per scan. This is a very low point density for the COG algorithm, and thus, the algorithm was not able to produce any acceptable results; subsequently, only the results of the CT algorithm are presented. Figure 14 shows raw airplane points in green and the reconstruction in blue, and Figure 15 visualizes the reconstructed airplane body in perspective view. Finally, Figure 16 shows the estimated coordinates of a landing aircraft. The red line is the initial VM solution assuming a CV motion model. The blue line is the refined CT solution.


Figure 12. Reconstructions by the three algorithms for Test #1 from data presented in Figure 11: (a) center of gravity (COG), (b) volume minimization with CA motion model (VM-CA), and (c) cube trajectories (CT).

Figure 13. Trajectory solutions for the three methods.

Figure 14. Reconstruction from sparse point clouds (average point density is 36 points/scan or 7.2 points/scan/sensor), using the CT algorithm for a landing Cessna airplane during Test #2: the captured airplane points are shown in green, the reconstruction is in blue.

Figure 15. Reconstruction from sparse point clouds (total number of points is 4320) from data presented in Figure 14: the shape of the plane, including the wings and tail, is recognizable.


Figure 16. Estimated coordinates of a landing Cessna airplane using VM-CV (red) and CT (blue) algorithms for the X, Y, and Z directions; X direction is along the centerline of the runway.

5. Discussion

The authors introduced the VM algorithm in an earlier study and used this algorithm assuming a 2D constant velocity model [11]. In that study, we showed that the VM outperforms the ICP algorithm based on data captured by a single sensor. The analysis using GPS installed on the airplane as reference showed that the RMSEs (root-mean-square errors) are 0.72 m/s with VM and 5.29 m/s with ICP even at short (10 m) object–sensor distances. The main reason that the ICP tends to fail is that it requires corresponding points between two point clouds; point clouds acquired by the Velodyne scanners have relatively low point density or resolution, especially when the object–sensor distance is large. Consequently, capturing "good" corresponding points is not possible.

While the VM performed well for the CV model in our earlier studies, later it turned out that the VM approach cannot adequately handle more complex motions due to the structure of the objective function at small scale. In our next study, we tried to address this issue of approximating the general motion model by applying the constant velocity model for shorter time periods [12]. This solution performed well, yet it was not robust. This study introduced cube trajectories to improve the motion estimation.

The results have to be compared in terms of accuracy and reliability under various aircraft motions. In the following, two indicators are used for qualitatively assessing and comparing the performance.
• The reconstruction quality is one indicator that can be used to compare the various algorithms and to assess the performance in general. If the reconstruction is good, the aircraft and its details are recognizable, indicating that the derived motion pattern is also close to the actual trajectory. A fuzzy, blurry reconstruction indicates that the estimated motion is not adequate. The reconstructions of the COG, VM-CA and CT algorithms in Figure 12 show that VM and CT outperform COG, and CT provides a better reconstruction than VM; the VM reconstruction is fuzzy compared to the reconstruction provided by the CT algorithm. The reason for the underperformance of the COG method is that different body points are backscattered at different epochs, and consequently, the COG does not accurately represent the center of gravity of the airplane body. Clearly, COG tends to fail when the scans contain a small number of points from the tracked object. This is also the case for Test #2, where the object–sensor distance is around 33 m, and thus, not enough points are captured from the Cessna-sized airplanes. In addition, the sensor network is installed on one side of the runway, which means that the sensors capture points from only one side of the airplane. This clearly pulls the COG toward the sensors. In contrast to COG, the CT is able to provide a fairly good quality reconstruction (see Figure 15), indicating that the algorithm performs well even if the scans contain a small number of points.
• The shape of the trajectory and the velocity profiles can also be used to assess the performance. For instance, in Figure 13 for Test #1, the trajectory derived from COG is not realistic; it is not possible that the airplane could have followed such a curvy trajectory. Note that the large tail at the end of the trajectory, which is definitely not a feasible motion, is caused by the lack of a sufficient number of captured points. In contrast, the VM-CA and CT results are feasible trajectories. However, it is unclear whether the small "waves" of the CT trajectory are the actual motion of the airplane, or the result of overfitting due to the fourth-degree motion model. Nevertheless, the largest difference between the two trajectories is about 10 cm, which is within the error envelope detected in our previous studies. The trajectory along the X, Y, and Z directions for Test #2 can be seen in Figure 16. This figure shows the decreasing altitude (see the Z component) as expected for landing aircraft. Also note that the CV model, depicted with red in Figure 16, is a line; however, it is very unlikely that this motion component behaves in such a way. The CT solution, depicted with blue, follows a more realistic trajectory. Unfortunately, a reference solution, for instance from GPS, is not available for the quantitative analysis of the methods. However, the residuals from the CT algorithm can describe the "internal" accuracy between the measured cube velocities and the underlying motion model. The average and standard deviation of the absolute residuals from the CT algorithm are presented in Table 4.

Table 4. Quantitative statistics of the CT method.

Test       | Average of the Absolute Residuals | Standard Deviation of the Absolute Residuals
           | X (m/s)   Y (m/s)   Z (m/s)       | X (m/s)   Y (m/s)   Z (m/s)
#1 Taxiing | 0.69      0.06      -             | 0.44      0.00      -
#2 Landing | 0.67      0.44      0.51          | 0.17      0.08      0.11

The statistics of the average absolute residuals (0.06–0.69 m/s) are similar to those we experienced earlier for simpler motions. The X velocity errors are dominant for both cases, but significantly larger than the Y component only in the case of the taxiing data; note that the X axis is the direction of landing and taxiing. The accuracies of the Y and Z components are similar. The standard deviation is also relatively low (0.08–0.17 m/s). Note that the residuals are velocities, and do not describe the positioning accuracy directly. The sampling frequency of the sensors is 10 Hz, and thus, the positioning error is expected to be around one to seven centimeters based on these velocity residuals.
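As a rough check of this conversion from velocity residuals to positioning error, the per-scan displacement error can be approximated by propagating the velocity residual over one 0.1-s sampling interval; this back-of-the-envelope relation simply restates the numbers above and is not a formula taken from the paper:

\Delta p \approx \sigma_v \,\Delta t = \sigma_v \cdot 0.1\,\mathrm{s}, \qquad 0.69\,\mathrm{m/s} \times 0.1\,\mathrm{s} \approx 6.9\,\mathrm{cm}, \qquad 0.08\,\mathrm{m/s} \times 0.1\,\mathrm{s} \approx 0.8\,\mathrm{cm},

which is consistent with the one- to seven-centimeter estimate.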

Limitations and Possible Applications

The prototype system allows for real-time tracking of aircraft during taxiing, landing or takeoffs. Figure 17 shows an example of a user interface, where a landing aircraft is tracked, and the demonstration system provides the velocity, altitude and position, including the error envelopes, along with video feedback. Currently, the main limitation of the developed system is the small observation space, which is an affordability issue. Covering larger sections along runways and taxiways would require a large number of sensors, which would increase the price of the system and the operation costs. However, mobile LiDAR scanners, such as Velodyne sensors, are expected to become cheaper in the coming years due to the increasing need for mass production boosted by the autonomous car industry. There are other applications where the proposed system is applicable at airports. For instance, LiDAR might offer a tracking solution at night or in bad visibility conditions, when optical sensing is compromised. Obviously, LiDAR will fail in extreme weather conditions, such as heavy rain or snow. With improving resolution and coverage, aircraft types can be detected and identified. Figure 18 shows the reconstruction of a Cessna and a Learjet, where the details seem to be adequate for aircraft model identification. Note that the scale is preserved in LiDAR data, making object identification easier. Clearly, machine learning or other algorithms can be developed to identify the types of the incoming airplanes captured by the LiDAR sensor network [9]. In addition, LiDAR, as a remote sensing technology, can also be used for monitoring the surroundings to detect and identify hazardous objects to improve the safety of apron operations (see [9,10]).


Figure 17. Aircraft tracking: (a) current video frame, (b) altitude profile with blue dashed line showing the error envelope in cm, red line is the runway surface, and (c) 2D trajectory.

Besides the possible applications presented above, the original intent of this effort was to develop a LiDAR-based prototype system to demonstrate the feasibility that airplanes can be identified and tracked in an airport environment without any cooperation from the aircraft. Note that aircraft navigation systems provide this information to the crew, but this data is not shared with airport authorities. Using remote sensing technology provides a non-cooperative alternative for collecting a large amount of motion data that can improve airport safety by helping understand and assess the risk of the pilots' driving patterns on the runways and taxiways (e.g., centerline deviation, wingspan separation). This information can be used in aircraft and airport planning, e.g., for checking whether the airport meets the standardized criteria, or for providing data for the modification of airport standards and for pilot education. To develop adequate risk models for taxiing behavior, Chou et al. used positioning gauges to obtain data at the Chiang Kai-Shek International Airport [24]. Another study from FAA/Boeing investigated the 747s' centerline deviations at the JFK and ANC airports [25–27]. They used laser diodes at two locations to measure the location of the nose and main gears. These approaches allow only for accurate on-ground aircraft tracking, as opposed to the LiDAR-based tracking system that is also capable of tracking during landings or takeoffs with high accuracy. Figure 19 shows the trajectories of all landing airplanes during Test #2.

Figure 18. Reconstruction of different aircraft types, (a) Cessna and (b) Learjet.

Figure 19. Altitude profiles of 29 landing airplanes from Test #2.

6. Conclusions

The paper discussed the feasibility of using a LiDAR sensor network to accurately track taxiing and landing aircraft. The sensors used—the Velodyne VLP-16 and HDL-32E—provide high sampling rates to capture an aircraft during taxiing or landing at relatively high speed; however, the captured point clouds are sparse. The main reason for the small number of LiDAR points captured from the aircraft body is the large, ~20–30 m object–sensor distance. This issue is addressed in the paper by presenting two novel model-based tracking algorithms. First, the VM algorithm is developed to find the parameters of a simple motion model, such as CV or CA, by minimizing the reconstruction volume. Then, the CT algorithm uses these reconstructions, as an initial solution, to find the parameters of the general motion model. The proposed method outperformed the COG method, and could achieve one- to seven-centimeter tracking accuracy. The VM and CT algorithms perform well on the acquired datasets. However, the dynamics of the motion in the presented examples is low, and therefore, it is necessary to assess the performance for more complex trajectories. Currently, sharp turning scenarios may not be handled with the presented algorithm. In the future, we plan to extend the algorithms by considering these motions in the model to be able to handle more complex maneuvers. The developed approach might also be used in other applications outside of the aviation domain, such as tracking vehicle traffic.

Supplementary Materials: C++ code with Matlab interface and sample data are available online at https://github.com/zkoppanyi/cas.

Acknowledgments: The authors would like to thank the Ohio State University Airport as the test site for this research and the Federal Aviation Administration who supported this work through the Center of Excellence for General Aviation, known as PEGASAS, the Partnership to Enhance General Aviation Safety Accessibility, and Sustainability. Although the FAA has sponsored this project, it neither endorses nor rejects the findings of this research.

Author Contributions: C. T. conceived the experiment. Z. K. conceived the sensor georeferencing. Z. K. and C. T. designed and performed the experiment. Z. K. conducted the data processing, analysis, and developed the algorithms. Z. K. and C. T. wrote the paper.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

This appendix explains some basic definitions used throughout the paper. A point captured by a LiDAR sensor is given with its X, Y and Z coordinates as well as the time when it was captured, i.e., p = [x, y, z, t] ∈ P, where P is the set of points. A set P ∈ C = 2^P is a point cloud, where C is the set of point clouds and 2^P is the power set of P. Consequently, a point cloud contains zero or more points. A scan is a point cloud collected by the sensor when it makes a complete 360° scan starting from 0°. Note that the times of the points in a scan might be different, but they have to be within the δt sampling time; this is the time needed to make a full 360° scan. The reverse is not true: points within the sampling time might not correspond to the same scan. The data collected by the sensor can be split into scans, and consequently, the collected data is a series of consecutive scans. The reconstruction or reconstructed point cloud is a point cloud that contains several transformed scans. In the paper, the scans are typically transformed based on a chosen motion model, resulting in a reconstruction.
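The definitions above map naturally onto simple container types. The following C++ sketch is only an illustration of this data organization (the type and function names are hypothetical, not taken from the released code): a timestamped point, a point cloud and scan as point containers, and a helper that splits a time-ordered point stream into consecutive scans of duration δt.

// Illustrative data structures for the definitions in Appendix A
// (hypothetical names, not the authors' released code).
#include <vector>

struct LidarPoint {            // p = [x, y, z, t]
    double x, y, z;            // coordinates in the global frame [m]
    double t;                  // acquisition time [s]
};

using PointCloud = std::vector<LidarPoint>;   // P: a (possibly empty) set of points
using Scan       = PointCloud;                // one full 360° revolution of the sensor

// Split a time-ordered point stream into consecutive scans of duration dt
// (e.g., 0.1 s for a 10 Hz sensor). Points are assigned to scans by time stamp.
std::vector<Scan> splitIntoScans(const PointCloud& stream, double dt) {
    std::vector<Scan> scans;
    if (stream.empty()) return scans;
    double scanStart = stream.front().t;
    scans.emplace_back();
    for (const auto& p : stream) {
        if (p.t - scanStart >= dt) {          // start a new scan every dt seconds
            scans.emplace_back();
            scanStart = p.t;
        }
        scans.back().push_back(p);
    }
    return scans;
}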

Appendix B

The applied motion models are very simple, yet this appendix summarizes these basic formulas for the sake of completeness. The general 2D motion of a pointwise particle can be described by time-variable polynomial functions:

x(t) = x_0 + a_1 t + a_2 t^2 + \ldots + a_n t^n, \quad y(t) = y_0 + b_1 t + b_2 t^2 + \ldots + b_n t^n. \quad (A1)

where a_0, ..., a_n and b_0, ..., b_n are the parameters of the motion. This model can be extended with the Z coordinate in the case of 3D motions, such that

z(t) = z_0 + c_1 t + c_2 t^2 + \ldots + c_n t^n. \quad (A2)

The model can be simplified, resulting in the constant acceleration (CA) model for curvilinear motions:

x(t) = x_0 + v_x t + a_x \frac{t^2}{2}, \quad y(t) = y_0 + v_y t + a_y \frac{t^2}{2}, \quad z(t) = z_0 + v_z t + a_z \frac{t^2}{2}, \quad (A3)

where x_0, y_0, z_0 are the starting position, v_x, v_y, v_z are the velocities, a_x, a_y, a_z are the accelerations along the X, Y, Z directions, and t is the time. Finally, the 3D constant velocity (CV) model is

x(t) = x_0 + v_x t, \quad y(t) = y_0 + v_y t, \quad z(t) = z_0 + v_z t. \quad (A4)

A point p = [x, y, z, t] ∈ P from the cloud P ∈ C can be transformed back to zero time. For instance, for the general motion model, knowing the a_0, ..., a_n, b_0, ..., b_n, c_0, ..., c_n motion parameters, the transformation takes the following form:

x_0 = x(t) - a_1 t - a_2 t^2 - \ldots - a_n t^n, \quad y_0 = y(t) - b_1 t - b_2 t^2 - \ldots - b_n t^n, \quad z_0 = z(t) - c_1 t - c_2 t^2 - \ldots - c_n t^n. \quad (A5)

Applying these equations to all captured points results in a point cloud called the reconstructed point cloud or reconstruction.
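To make the back-transformation of Equation (A5) concrete, the following C++ sketch applies a constant acceleration model (Equation (A3)) to shift every captured point back to zero time; setting the accelerations to zero gives the CV case (Equation (A4)). The types and names repeat the hypothetical ones from the Appendix A sketch and are not part of the authors' released code.

// Sketch: transform points back to zero time under a CA motion model (Equation (A3)).
#include <vector>

struct LidarPoint { double x, y, z, t; };            // as in the Appendix A sketch
using PointCloud = std::vector<LidarPoint>;

// CA motion parameters; setting ax = ay = az = 0 yields the CV model (Equation (A4)).
struct MotionCA { double vx, vy, vz, ax, ay, az; };

// Undo the modeled motion for a single point (cf. Equation (A5) for the general case).
LidarPoint toZeroTime(const LidarPoint& p, const MotionCA& m) {
    LidarPoint q = p;
    q.x = p.x - m.vx * p.t - 0.5 * m.ax * p.t * p.t;
    q.y = p.y - m.vy * p.t - 0.5 * m.ay * p.t * p.t;
    q.z = p.z - m.vz * p.t - 0.5 * m.az * p.t * p.t;
    q.t = 0.0;
    return q;
}

// Build the reconstruction: apply the back-transformation to every captured point.
PointCloud reconstruct(const PointCloud& cloud, const MotionCA& m) {
    PointCloud out;
    out.reserve(cloud.size());
    for (const auto& p : cloud) out.push_back(toZeroTime(p, m));
    return out;
}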

Appendix C

Algorithm A1 presents the steps of the simulated annealing method to solve the volume minimization defined in Equation (2).

Algorithm A1 Simulated annealing algorithm for volume minimization

1. Input:
2.   P ∈ C (point cloud), T: (P, R, R^n) → P (motion model), r0 ∈ R^n (initial search boundaries),
3.   tinit ∈ R (initial temperature), rmin ∈ R (minimum search radius), imax ∈ Z+ (maximum iteration number),
4.   c ∈ [0, 1] (cooling coefficient), nn ∈ Z+ (number of neighborhoods)
5. Initialization:
6.   i ← 0; t ← tinit
7.   x, xbest ← (xl + xu)/2
8.   v ← Vol(P)
9. Iterations:
10. while ri > rmin
11.   t ← c^i                                 // Decrease the temperature
12.   ri ← r0 · t                             // Search radius for the inner iteration
13.   k ← 0
14.   while k < nn
15.     x′ ← xbest + ri · rnd()               // Pick a neighbor randomly
16.     P′ ← ∪_{p∈P} T(p, t, x′)              // Calculate the cloud based on the motion model
17.     v′ ← Vol(P′)                          // Calculate the volume
18.     if v′ < v or exp(−|v′ − v|) > rnd()   // Update the local best
19.       x ← x′
20.       v ← v′
21.     end if
22.     k ← k + 1
23.   end while
24.   xbest ← x                               // Update the best solution
25.   i ← i + 1
26. end while
27. Output: xbest
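For readers who prefer code to pseudocode, the following C++ sketch mirrors the structure of Algorithm A1 under simplifying assumptions: the objective is passed in as a placeholder volume function (a real implementation would transform the cloud with the motion model and voxelize the reconstruction, as in Section 2.6), the search perturbs a generic parameter vector, and all names and default values are illustrative rather than the authors' released implementation.

// Sketch of simulated annealing for volume minimization (mirrors Algorithm A1;
// hypothetical helper names, not the released code).
#include <cmath>
#include <functional>
#include <random>
#include <vector>

using Params = std::vector<double>;   // motion-model parameters x

// Placeholder objective: volume of the reconstruction produced by parameters x.
using VolumeFn = std::function<double(const Params&)>;

Params simulatedAnnealing(const VolumeFn& volume, Params x0,
                          double r0, double rmin, double c, int nn) {
    std::mt19937 gen(42);
    std::uniform_real_distribution<double> uni(-1.0, 1.0);   // random neighbor offset
    std::uniform_real_distribution<double> acc(0.0, 1.0);    // acceptance draw

    Params x = x0, xbest = x0;
    double v = volume(x);
    int i = 0;
    double ri = r0;

    while (ri > rmin) {
        double t = std::pow(c, i);       // decrease the "temperature"
        ri = r0 * t;                     // search radius for the inner loop
        for (int k = 0; k < nn; ++k) {
            Params xp = xbest;           // pick a random neighbor of the best solution
            for (double& p : xp) p += ri * uni(gen);
            double vp = volume(xp);
            // accept improvements, or occasionally a worse solution, to escape local minima
            if (vp < v || std::exp(-std::fabs(vp - v)) > acc(gen)) {
                x = xp;
                v = vp;
            }
        }
        xbest = x;                       // update the best solution
        ++i;
    }
    return xbest;
}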

References

1. Toth, C.; Jó´zków, G. Remote sensing platforms and sensors: A survey. ISPRS J. Photogramm. Remote Sens. 2016, 115, 22–36. [CrossRef] 2. Navarro-Serment, L.E.; Mertz, C.; Hebert, M. Pedestrian Detection and Tracking Using Three-Dimensional LADAR Data. In Field and Service Robotics; Springer Tracts in Advanced Robotics; Springer: Berlin, Germany, 2010; pp. 103–112, ISBN 978-3-642-13407-4. 3. Cui, J.; Zha, H.; Zhao, H.; Shibasaki, R. Laser-based detection and tracking of multiple people in crowds. Comput. Vis. Image Underst. 2007, 106, 300–312. [CrossRef] 4. Sato, S.; Hashimoto, M.; Takita, M.; Takagi, K.; Ogawa, T. Multilayer lidar-based pedestrian tracking in urban environments. In Proceedings of the 2010 IEEE Intelligent Vehicles Symposium, San Diego, CA, USA, 21–24 June 2010; pp. 849–854. 5. Tarko, P.; Ariyur, K.B.; Romero, M.A.; Bandaru, V.K.; Liu, C. Stationary LiDAR for Traffic and Safety Applications–Vehicles Interpretation and Tracking; USDOT: Washington, DC, USA, 2014. 6. Börcs, A.; Nagy, B.; Benedek, C. On board 3D object perception in dynamic urban scenes. In Proceedings of the 2013 IEEE 4th International Conference on Cognitive Infocommunications (CogInfoCom), Budapest, Hungary, 2–5 December 2013; pp. 515–520. 7. Toth, C.K.; Grejner-Brzezinska, D. Extracting dynamic spatial data from airborne imaging sensors to support traffic flow estimation. ISPRS J. Photogramm. Remote Sens. 2006, 61, 137–148. [CrossRef] 8. Statistical Summary of Commercial Jet Airplane Accidents Worldwide Operations. Available online: http://www.boeing.com/resources/boeingdotcom/company/about_bca/pdf/statsum.pdf (accessed on 16 December 2017). 9. Mund, J.; Frank, M.; Dieke-Meier, F.; Fricke, H.; Meyer, L.; Rother, C. Introducing LiDAR Point Cloud-based Object Classification for Safer Apron Operations. In Proceedings of the International Symposium on Enhanced Solutions for Aircraft and Vehicle Surveillance Applications, Berlin, Germany, 7–8 April 2016. 10. Mund, J.; Zouhar, A.; Meyer, L.; Fricke, H.; Rother, C. Performance Evaluation of LiDAR Point Clouds towards Automated FOD Detection on Airport Aprons. In Proceedings of the 5th International Conference on Application and Theory of Automation in Command and Control Systems, Toulouse, France, 30 September–2 October 2015. 11. Koppanyi, Z.; Toth, C. Estimating Aircraft Heading based on Laserscanner Derived Point Clouds. In Proceedings of the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Munich, Germany, 25–27 March 2015; Volume II-3/W4, pp. 95–102. 12. Toth, C.; Jozkow, G.; Koppanyi, Z.; Young, S.; Grejner-Brzezinska, D. Monitoring Aircraft Motion at Airports by LiDAR. In Proceedings of the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Prague, Czech Republic, 12–19 July 2016; Volume III-1, pp. 159–165. 13. VLP-16 Manual. Available online: http://velodynelidar.com/vlp-16.html (accessed on 15 December 2017). 14. HDL-32E Manual. Available online: http://velodynelidar.com/hdl-32e.html (accessed on 15 December 2017). 15. Wang, D.Z.; Posner, I.; Newman, P. Model-free Detection and Tracking of Dynamic Objects with 2D Lidar. Int. J. Robot. Res. 2015, 34, 1039–1063. [CrossRef] 16. Pomerleau, F.; Colas, F.; Siegwart, R. A Review of Point Cloud Registration Algorithms for Mobile Robotics. Found. Trends Robot. 2015, 4, 1–104. [CrossRef] 17. Moosmann, F.; Stiller, C. Joint self-localization and tracking of generic objects in 3D range data. 
In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 1146–1152. 18. Dewan, A.; Caselitz, T.; Tipaldi, G.D.; Burgard, W. Motion-based detection and tracking in 3D LiDAR scans. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 4508–4513. 19. Kaestner, R.; Maye, J.; Pilat, Y.; Siegwart, R. Generative object detection and tracking in 3D range data. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; pp. 3075–3081.

20. Dierenbach, K.; Jozkow, G.; Koppanyi, Z.; Toth, C. Supporting Feature Extraction Performance Validation by Simulated Point Cloud. In Proceedings of the ASPRS 2015 Annual Conference, Tampa, FL, USA, 4–8 May 2015. 21. Saez, J.M.; Escolano, F. Entropy Minimization SLAM Using Stereo Vision. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005; pp. 36–43. 22. Saez, J.M.; Hogue, A.; Escolano, F.; Jenkin, M. Underwater 3D SLAM through entropy minimization. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation, Orlando, FL, USA, 15–19 May 2006; pp. 3562–3567. 23. Aarts, E.; Korst, J.; Michiels, W. Simulated Annealing. In Search Methodologies; Springer: Boston, MA, USA, 2005; pp. 187–210, ISBN 978-0-387-23460-1. 24. Chou, C.P.; Cheng, H.J. Aircraft Taxiing Pattern in Chiang Kai-Shek International Airport. In Airfield and Highway Pavement: Meeting Today’s Challenges with Emerging Technologies, Proceedings of the Airfield and Highway Pavements Specialty Conference, Atlanta, GA, USA, 30 April–3 May 2006; American Society of Civil Engineers: Reston, VA, USA, 2016. [CrossRef] 25. Sholz, F. Statistical Extreme Value Analysis of ANC Taxiway Centerline Deviation for 747 Aircraft; FAA/Boeing Cooperative Research and Development Agreement; Federal Aviation Administration: Washington, DC, USA, 2003. 26. Sholz, F. Statistical Extreme Value Analysis of JFK Taxiway Centerline Deviation for 747 Aircraft; FAA/Boeing Cooperative Research and Development Agreement; Federal Aviation Administration: Washington, DC, USA, 2003. 27. Sholz, F. Statistical Extreme Value Concerning Risk of Wingtip to Wingtip or Fixed Object Collision for Taxiing Large Aircraft; Federal Aviation Administration: Washington, DC, USA, 2005.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).