
Article

Emergency Response Using Volunteered Passenger Aircraft Remote Sensing Data: A Case Study on Flood Damage Mapping

Chisheng Wang 1,2, Junzhuo Ke 1,*, Wenqun Xiu 3, Kai Ye 3 and Qingquan Li 1

1 Key Laboratory of Urban Informatics, School of Architecture & Urban Planning, Shenzhen University, Shenzhen 518000, China; [email protected] (C.W.); [email protected] (Q.L.)
2 Key Laboratory of Urban Land Resources Monitoring and Simulation, Ministry of Land and Resources, Shenzhen 518000, China
3 Shenzhen Urban Public Safety and Technology Institute, Shenzhen 518000, China; [email protected] (W.X.); [email protected] (K.Y.)
* Correspondence: [email protected]

 Received: 20 August 2019; Accepted: 24 September 2019; Published: 25 September 2019 

Abstract: Current satellite remote sensing data still have some inevitable defects, such as a low observing frequency, high cost and susceptibility to dense cloud cover, which limit rapid response to ground changes and many potential applications. However, passenger aircraft may serve as an alternative remote sensing platform for emergency response due to their high revisit rate, dense coverage and low cost. This paper introduces a volunteered passenger aircraft remote sensing (VPARS) method for emergency response. It uses images captured by passenger volunteers during flight. Based on computer vision algorithms and geocoding procedures, these images can be processed into a mosaic orthoimage for rapid ground disaster mapping. Notably, due to the relatively low flight altitude, small clouds can be easily removed by stacking multi-angle tilt images in the VPARS method. A case study on the 2019 Guangdong flood monitoring validates these advantages. The frequent aircraft revisits, intensive flight coverage, multi-angle images and low cost of the VPARS make it a potential way to complement traditional remote sensing methods in emergency response.

Keywords: emergency response; volunteered remote sensing; passenger aircraft; flood damage mapping

1. Introduction

As early as 1997, Kuroda proposed a global disaster observation satellite system (GDOS) to observe disasters through remote sensing [1]. After decades of development, modern remote sensing technology has made great progress. Since the earliest digital imaging, remote sensing technology has developed toward higher spatial, spectral and temporal resolution, and has been applied in many disaster monitoring fields. In 2008, China launched a microsatellite environmental and disaster monitoring constellation, including two small satellites. One of them was equipped with an interferometric imaging spectrometer (IFIS), which has a good application prospect in environmental and disaster monitoring [2]. The high-resolution digital elevation models and imaging spectra obtained from modern remote sensing techniques are playing a significant role in risk assessment of fires, earthquakes and other disasters [3,4]. The development of interferometric synthetic aperture radar (InSAR) remote sensing provides an effective way to measure ground deformation with sub-centimeter accuracy. For flood disasters, the flood and landslide areas can be demarcated for assessment and land environment detection [5]. For forest fires, satellite data can be used to identify and map forest fires and record the frequency of different vegetation types or areas affected [6].

Assisted by geographic information systems, remote sensing data can be used to predict the factors affecting fire occurrence and understand the dynamic range of fire. Lee used GIS and remote sensing technologies to assess the hazards of the Penang landslide, Malaysia, and determined the location of the landslide within the study area through aerial photos and field surveys [7].

Although great progress has been made in modern remote sensing, large-scale disaster response data acquisition based on satellite and airborne platforms remains difficult [8,9]. The high cost and lack of operational flexibility limit the timeliness of satellite remote sensing [10]. To achieve a short revisit time, Planet Labs has used a fleet of inexpensive microsatellites to provide complete images of the earth [11], but it is still unable to respond quickly to floods. On the other hand, because satellites orbit high above the ground, cloud cover on rainy days prevents their cameras from imaging the ground through the clouds.

However, we find that passenger aircraft could be a proper remote sensing platform to overcome these limitations. The civil aviation industry has developed to a high level in the 21st century, with up to 8000 routes and nearly 20,000 flights in the air at any given moment, covering nearly 200 countries and regions, as well as 1700 airports [12]. Using onboard cameras, video or photos of the ground from takeoff to landing can be easily obtained. Lower flight heights and a dense number of flights make it easier to obtain usable data from aircraft. Numerous attempts have been made to use volunteered geographic information (VGI) for flood damage mapping [13–15]. Although the quality of volunteered data is not comparable with professional remote sensing, the development of computer image processing algorithms, such as structure from motion (SfM) [16,17], makes it possible to retrieve useful geographic information from it.

In this paper, we propose a novel volunteered remote sensing method using passenger aircraft platforms (VPARS). We first studied how to generate standard remote sensing products from the images taken in passenger aircraft. Potential applications to flood disaster observation were then investigated through a case study of the 2019 Guangdong flood damage mapping. The case study shows that the VPARS can clearly map the flooded area and overcome the cloud cover limitation to some degree. We conclude that the VPARS method can complement the shortcomings of traditional remote sensing in flood damage mapping and has great potential for other emergency response scenarios.

2. Method

In the VPARS method, there are mainly three steps to generate regular remote sensing products. The first step is data preparation, where the positions and interior orientation of the camera for each photo are initialized. Secondly, structure from motion processing is implemented to estimate the three-dimensional structures and refine the camera positions and orientation. Finally, based on the obtained camera parameters and three-dimensional ground surface, remote sensing products can be generated by orthorectification and mosaicking of the original photos.

2.1. Initialization of Image Positions and Interior Orientation Parameters

After decades of development, modern aircraft are equipped with much advanced equipment, including high-definition cameras and modern radiocommunication systems. Although cameras on aircraft can clearly capture high-quality images of the ground, these images lack the corresponding coordinate information for matching. Fortunately, the flight line coordinates of modern aircraft can be acquired from the Aircraft Communication Addressing and Reporting System (ACARS) and Automatic Dependent Surveillance Broadcast (ADS-B), which are collected by commercial companies for flight tracking services, and we can normally obtain the GPS positions of a flight at a certain sampling frequency (e.g., ~1/30 Hz in FlightAware [12]) from this information. Given the stable speed of aircraft, the initial camera position can be calculated by piecewise linear interpolation (Figure 1).
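As a concrete illustration of this step, the following is a minimal Python sketch of the interpolation, assuming the track log has already been parsed into timestamped latitude/longitude/altitude tuples; the variable names and sample values below are hypothetical.

```python
import numpy as np

def interpolate_camera_positions(track_t, track_llh, photo_t):
    """Piecewise linear interpolation of camera positions.

    track_t   : (N,) timestamps of the flight tracking points (s), ~30 s apart
    track_llh : (N, 3) latitude, longitude and altitude at the tracking points
    photo_t   : (M,) capture timestamps of the photos (e.g., from EXIF)
    Returns an (M, 3) array with one interpolated position per photo.
    """
    track_t = np.asarray(track_t, dtype=float)
    photo_t = np.asarray(photo_t, dtype=float)
    llh = np.asarray(track_llh, dtype=float)
    # np.interp performs 1-D piecewise linear interpolation per coordinate;
    # this is adequate for short, straight segments flown at stable speed.
    return np.column_stack([np.interp(photo_t, track_t, llh[:, k]) for k in range(3)])

# Hypothetical example: tracking points 30 s apart, two photos in between.
track_t = [0.0, 30.0, 60.0]
track_llh = [[22.64, 113.81, 6000.0],
             [22.70, 113.90, 6400.0],
             [22.76, 113.99, 6800.0]]
print(interpolate_camera_positions(track_t, track_llh, [12.0, 45.0]))
```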


Figure 1. Piecewise linear interpolation of the discrete aircraft positions downloaded from FlightAware. Blue asterisks are the original flight tracking points with position information, and black points are the image locations interpolated from original flight tracking points.

To facilitate conjugate point searching, we also need the approximate interior orientation parameters, including the sensor size and focal length. These parameters can be read from the exchangeable image file (EXIF) header of the images. In the next step, the more precise values of the sensor size, focal length and other interior camera orientation parameters will be determined by SfM processing [18].

2.2. Structure from Motion Processing

SfM is a three-dimensional reconstruction method based on unordered images, and many software programs are available for SfM processing, e.g., Pix4d, VisualSFM, Agisoft Photoscan and Menci APS. We adopted Agisoft Photoscan for SfM processing in this work [19]. Before performing core calculations, appropriate images were selected to determine the camera parameters. Considering the algorithms used in this software are not fully described, here we present the SfM workflow as follows (Figure 2):

Figure 2. Structure from motion (SfM) workflow.

Step 1: Detect features in each image by using the SIFT algorithm [20]. In this step, the Gaussian function was used to convolve the input image. The Gaussian pyramid was obtained after repeatedly applying convolution. By subtracting adjacent levels of the Gaussian pyramid, the difference of Gaussian (DOG) can be obtained. Key locations were then defined as the maxima and minima of the DOG function. Low-contrast points were discarded, and the rest of them were retained as stable feature points. Next, one or more orientations were calculated for each feature point $A_{ij}$, where the image gradient magnitude $M_{ij}$ and orientation $R_{ij}$ were computed using pixel differences:

$$M_{ij} = \sqrt{\left(A_{ij} - A_{i+1,j}\right)^2 + \left(A_{ij} - A_{i,j+1}\right)^2}, \qquad (1)$$

$$R_{ij} = \operatorname{atan2}\left(A_{ij} - A_{i+1,j},\; A_{i,j+1} - A_{ij}\right). \qquad (2)$$
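A direct numpy transcription of Equations (1) and (2) over a smoothed image is sketched below; it reproduces only this gradient step of SIFT, not the full detector, and the array name is hypothetical.

```python
import numpy as np

def gradient_magnitude_orientation(A):
    """Gradient magnitude M and orientation R of a Gaussian-smoothed image A,
    computed with the forward pixel differences of Equations (1) and (2)."""
    A = np.asarray(A, dtype=float)
    d1 = A[:-1, :-1] - A[1:, :-1]   # A_ij - A_{i+1,j}
    d2 = A[:-1, 1:] - A[:-1, :-1]   # A_{i,j+1} - A_ij
    M = np.hypot(d1, d2)            # Equation (1); signs vanish under squaring
    R = np.arctan2(d1, d2)          # Equation (2)
    return M, R
```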


Step 2: Use the kd-tree model to calculate the Euclidean distance between the feature points of two images for feature point matching, and find image pairs with the required number of feature points.

Step 3: For each pair, estimate an F-matrix and refine the matches by using the Random Sample Consensus (RANSAC) algorithm [21]. If feature points can be chained in matching pairs and detected, then a track can be formed.
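Steps 2 and 3 can be reproduced with off-the-shelf OpenCV primitives, as in the sketch below; it uses a FLANN kd-tree for descriptor matching and RANSAC for the F-matrix, but it is only an approximation of what Photoscan does internally, and the ratio-test threshold is an assumption.

```python
import cv2
import numpy as np

def match_image_pair(img1, img2, ratio=0.75):
    """SIFT matching with a kd-tree (Step 2) and RANSAC F-matrix refinement (Step 3)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # FLANN with kd-trees finds nearest neighbours in descriptor space.
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    good = [m for m, n in flann.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]           # Lowe's ratio test

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC estimates the fundamental matrix while rejecting outlier matches.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
    inliers = mask.ravel() == 1
    return F, pts1[inliers], pts2[inliers]
```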

Step 4: Use sparse bundle adjustment (SBA) to estimate the camera positions and extract a low-density point cloud [22,23]. We delivered the feature points from RANSAC to SBA to estimate the camera positions and extracted a low-density point cloud. Based on the projection function $\pi$, for each point pair, $E_i^j$ is defined as the difference between the image reprojection $\tilde{x}_i^j$ of map point $j$ in the $i$-th image and its actual image measurement $x_i^j$:

$$E_i^j = \tilde{x}_i^j - \pi\left(R_i, T_i, x_i^j\right). \qquad (3)$$

Define $R_i$ and $T_i$ as the camera pose of each frame and $X_i^j$ as the corresponding 3D coordinates of each feature point, such that the bundle adjustment optimization model can be written as Equation (4). Here, for each feature point, the residual error is represented by $E_i^j$, and $\Omega_i^j$ is the information matrix of the feature point, used for weight representation. During the bundle adjustment processing, the poses of all frames and the 3D information of the feature points are optimized, except for the original frames, which are fixed to eliminate the gauge freedom.

$$F\left(\{R_i, T_i\}_{i=1..m},\, \{X_i^j\}_{j=1..N}\right) = \sum_{i=1}^{m} \sum_{j=1}^{N} \left(E_i^j\right)^{T} \Omega_i^j\, E_i^j. \qquad (4)$$

Thus, the bundle adjustment optimization procedure means solving the following minimization problem:

$$\left\{R_i, T_i, X_i^j\right\}_{i=1..m,\; j=1..N} = \operatorname*{argmin}_{R_i,\, T_i,\, X_i^j} F(R, T, X). \qquad (5)$$

This can be solved by iterations of reweighted nonlinear least squares. One fundamental requirement to do this efficiently is to differentiate the measurement errors and obtain their Jacobians with respect to the parameters that need to be estimated.

Step 5: Optimize the internal and external camera orientation parameters using ground control points (GCPs).

Step 6: Calculate depth information for each camera and combine it into a single dense point cloud.
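For readers who want to experiment with the bundle adjustment of Equations (3)-(5), the following is a minimal, unweighted (Ω = I) reprojection-residual sketch built on SciPy, assuming a simple pinhole camera with a fixed focal length; it is not the solver used by Photoscan.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, n_cams, n_pts, cam_idx, pt_idx, obs_xy, f):
    """Stacked residuals E_i^j = x_i^j - pi(R_i, T_i, X^j) of Equation (3)."""
    cams = params[:n_cams * 6].reshape(n_cams, 6)   # per camera: 3 rotvec + 3 translation
    pts = params[n_cams * 6:].reshape(n_pts, 3)     # 3D feature points
    R = Rotation.from_rotvec(cams[cam_idx, :3])     # one rotation per observation
    p_cam = R.apply(pts[pt_idx]) + cams[cam_idx, 3:]
    proj = f * p_cam[:, :2] / p_cam[:, 2:3]         # pinhole projection pi(...)
    return (obs_xy - proj).ravel()

# Minimizing the sum of squared residuals realizes Equations (4)-(5) with unit
# weights; x0 stacks initial camera poses (Section 2.1) and triangulated points.
# sol = least_squares(reprojection_residuals, x0,
#                     args=(n_cams, n_pts, cam_idx, pt_idx, obs_xy, f))
```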

2.3. Orthoimage Mosaic Generation

When the SfM workflow is finished, a dense point cloud can be obtained. At the same time, a digital surface model (DSM) in the form of a raster image can be generated by regular interpolation of the point cloud. With the DSM and the optimized camera positions and orientation parameters, the multiple input photos are orthorectified to orthophotos. To mosaic the orthophotos, several strategies are practicable, such as averaging the values of all pixels from individual photos, taking pixels from the photo observations closest to the normal direction, or using a frequency domain approach [19] (a sketch of the second strategy is given below).

More importantly, a notable feature of the VPARS is that its height is significantly lower than that of satellites. When the aircraft passes through the clouds, the photos captured at different positions have different blind areas blocked by clouds (Figure 3a). Therefore, it is possible to generate a cloud-free orthoimage by combining these multi-view photos. The principle is similar to previous cloud-free generation methods using time-series satellite images with different cloud cover [24,25]. In this case, to remove cloud cover in the mosaic photo, we manually chose the images in the cloud-covered area for generating the orthomosaic image. Thus, the final orthomosaic can be free of cloud according to our selection (Figure 3b).
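As an illustration of the second mosaicking strategy above (per-pixel selection of the most nadir-looking observation), here is a minimal numpy sketch; it assumes the orthophotos are already co-registered on a common grid and that a per-pixel off-nadir viewing angle is available for each photo, both simplifications for illustration.

```python
import numpy as np

def mosaic_nearest_to_nadir(orthophotos, off_nadir_deg):
    """Mosaic co-registered orthophotos by taking, per pixel, the value from
    the photo whose viewing direction is closest to the normal (nadir).

    orthophotos   : (K, H, W) stack on a common grid, np.nan where not covered
    off_nadir_deg : (K, H, W) per-pixel off-nadir viewing angle in degrees
    """
    stack = np.asarray(orthophotos, dtype=float)
    angles = np.where(np.isnan(stack), np.inf, off_nadir_deg)  # skip nodata cells
    best = np.argmin(angles, axis=0)                           # winning photo index
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```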


Figure 3. (a) A sketch map showing different viewing angles during flights, and (b) an example showing how multi-view images generate cloud-free images.

3. Results

3.1. Study Area

In June 2019, heavy rain and floods attacked southern China's Guangdong province, destroying roads and toppling houses. According to the local publicity department, 13 people were killed and two were still missing. More than 280,000 people were affected and over 4000 houses collapsed [26]. The rapid mapping of the flood area is extremely important for local authorities, who rely on such information to determine the number of quilts, tents, folding beds and other disaster relief supplies to be sent to the affected area.

Guangdong, a coastal province in South China on the north shore of the South China Sea, is the most populous province of China and the 15th-largest by area. It has a humid subtropical climate and routinely suffers from droughts. However, its economy is larger than that of any other province in the nation and was the 6th-largest sub-national economy in the world in 2018. Flood events may therefore cause severe damage to this area. The VPARS method is a suitable way for flood damage mapping in the Guangdong province: the developed air traffic over the province allows dense observation and a high observation frequency, and the cloud-free map generation capability of VPARS by combining multi-view photos can overcome the frequent cloud cover problem in southern China.

In this case, we processed a series of photos taken by a passenger volunteer on flight CZ6591, from Shenzhen to Ningbo, on June 25, 2019 (09:35 am, local time). These photos were taken over a part of the Dong River, from Taimei town to Guanyinge town in Huizhou City, across a length of about 16.42 km. The Dong River is the eastern tributary of the Pearl River in the Guangdong province, southern China.
It is one of the three main sources of floods in the Pearl River Delta, the other two being the Xi River and the Bei River [27]. The biggest flood peaks in the Dong River mostly occur from May to June, lasting 7 to 15 days [28]. Historically, there were 14 major floods in the Dong River Basin from 1864 to 1985. The Dong River and its tributaries were also the source of the June 2019 Guangdong flood. This observation provides an opportunity to validate the capability of the VPARS to complement traditional satellite remote sensing for rapid flood damage mapping.

3.2. Data Processing

We downloaded the flight track log from the FlightAware website to obtain coarse GPS trajectory positions of flight CZ6591 every 30 s. Based on the timestamp of each image, we ran a piecewise linear interpolation on the discrete flight position data to obtain the camera position for each image.


Considering the error between the reported flight positions and the interpolation, the measurement accuracy for the camera position was set to 100 m. We then defined an average ground altitude of 0 m to avoid computation errors due to the very oblique shooting angle. The focal length was preset as 4 mm and the sensor size as 2.65 mm × 1.99 mm, as read from the image header file captured by the camera. The camera positions and a sparse point cloud were further estimated by applying sparse bundle adjustment.

A preliminary orthorectified image was derived based on the sparse point cloud. To locate the ground control points (GCPs), we used Google Earth images as reference. The heights of these GCPs were acquired from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) global digital elevation model (GDEM). In this paper, to be consistent with the GDEM height, these GCPs were all selected on the ground; none of them were on surface objects like buildings. With the GCPs and preliminary camera positions, we used the SfM step mentioned before to generate the dense point cloud. The point cloud is filtered firstly by the Local Outlier Factor (LOF) method [29], and some remaining noisy points are further removed manually. The clouds above the ground are clearly identified, as shown in Figure 4. To remove the cloud effect on the orthomosaic image, we interactively chose the images in which we could observe the ground as the source for orthomosaic image generation in the cloud-covered area.
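The LOF filtering step can be reproduced with scikit-learn, as sketched below; the neighbour count and contamination fraction are illustrative assumptions, and the paper additionally removes residual noise manually.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def filter_point_cloud(xyz, n_neighbors=20, contamination=0.05):
    """Drop noisy points from an (N, 3) dense point cloud with the Local
    Outlier Factor method; fit_predict labels inliers +1 and outliers -1."""
    lof = LocalOutlierFactor(n_neighbors=n_neighbors, contamination=contamination)
    labels = lof.fit_predict(np.asarray(xyz))
    return np.asarray(xyz)[labels == 1]
```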

Figure 4. The derived point cloud shows that the clouds above the ground are clearly identified.

3.3. Results

The final VPARS products include a dense point cloud and an orthomosaic image, as shown in Figure 5. We can see the cloud above the ground is clearly removed in the orthomosaic image, significantly increasing the monitored area compared to conventional satellite remote sensing. Although the ground sampling resolution cannot compete with UAV images or very high-resolution satellite images, it is still sufficient to distinguish objects at the meter level, such as roads, buildings and rivers, which is sufficient for flood mapping.



Figure 5. (a) Point cloud data, and (b) an orthomosaic image of the study area.

Due to the development of imaging devices, the camera carried by a smartphone can now obtain relatively high-resolution images for ground observation missions. In this study, the iPhone X dual back cameras used had a 4032 × 3024 image capturing resolution. Given that the image data come from a volunteer company or individual, from whom data cannot be requested from a professional perspective, we used the default settings of the camera during the flight. However, it is possible to reduce the impact of sunlight by selecting in advance a seat on the side opposite the sunlight and taking a shooting angle as perpendicular to the ground as possible while avoiding internal objects of the aircraft. In this case, we took 40 images as data and the exposure time was 1/750 s. The ground sampling resolution (GSR) of the orthomosaic image was 1.17 m/pix, calculated by

$$\mathrm{GSR} = \frac{\text{flying distance} \times \text{physical dimension of sensor}}{\text{lens focal length} \times \text{pixel dimension of image}}. \qquad (6)$$

However, we should note that the image from the camera has a shooting angle, so the flying height cannot represent the flying distance directly, and the effective resolution should be larger than the calculated GSR. In this case, according to the quality of the initial orthomosaic image using the default resolution, we found the effective resolution to be around 5 m/pix, so we reset the pixel size to 5 m for the final orthomosaic image exportation. We also estimated the camera position error and the GCP error in this study. As shown in Figure 6, the camera position error of the 40 images reaches 172.38 m (this includes an estimation error of 95.96 m for longitude, 123.21 m for latitude and 72.97 m for altitude). The GCP horizontal error is estimated to be 17.73 m.
Apart from the time consumed in manual operations, the whole processing time of the algorithms was only about 5 min, most of which was spent on dense point cloud generation and orthomosaic processing. The low time cost implies the VPARS is capable of fast emergency response and large-scale area monitoring.
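For concreteness, a quick numeric check of Equation (6): the paper does not state the slant flying distance, so the value below is an assumption chosen only to show that a distance of roughly 7 km reproduces the reported 1.17 m/pix.

```python
def ground_sampling_resolution(flying_distance_m, sensor_dim_mm, focal_mm, image_dim_px):
    """GSR in m/pix per Equation (6): (flying distance x sensor dimension)
    divided by (focal length x image dimension in pixels)."""
    return (flying_distance_m * sensor_dim_mm) / (focal_mm * image_dim_px)

# Assumed ~7120 m slant distance, 2.65 mm sensor width, 4 mm focal length,
# 4032 px image width -> about 1.17 m/pix, matching the value in the text.
print(ground_sampling_resolution(7120.0, 2.65, 4.0, 4032))
```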



Figure 6. (a) Camera locations and error estimates, and (b) ground control point (GCP) locations and error estimates. Z error is represented by ellipse color. X, Y errors are represented by ellipse shape.

In the scenario of flood monitoring, remote sensing technology can be used to detect the location and scope of disasters, analyze water volume and depth, and assess the impact of floods on cultivated land, residential areas, towns and roads. In this case, the results of the VPARS image can also be applied to identify and determine the flood range. As shown in Figure 5b, the flooded area is clearly visible in the VPARS orthomosaic image. To obtain a more precise result, we manually extracted the flooded area along the Dong River in this study. The damage map was then overlaid on a Google Earth image, as shown in Figure 7. We can clearly observe that much cropland and grassland and many roads and buildings were destroyed in this flood.

Figure 7. Flood map overlaid on a Google Earth image.

As the Google Earth image does not contain classification information of ground objects, we further overlaid the flood damage map on a FROM-GLC map for a more detailed analysis of the flood detection results (Figure 8). The full name of FROM-GLC is Finer Resolution Observation and Monitoring of Global Land Cover, which was produced by Gong et al. at a 10 m resolution [30,31]. The areas affected by flood and waterlogging can be calculated by a simple GIS spatial analysis step. We found the largest flood-impacted area was 0.965 km² in the impervious surface


area surrounding the river, implying a significant influence of this flood on areas of human activity. The second most affected ground object was cropland, which also suffers greatly from floods; a 0.74 km² affected area was detected. Other affected ground objects vary from 0.003 to 0.13 km², details of which are shown in Table 1. The per-class areas follow from a simple overlay of the flood map with the land cover raster, as sketched after the table.

Table 1. Statistics of flooded area.

Ground Object        Area (km²)
Cropland             0.7377
Forest               0.0613
Grassland            0.1334
Shrubland            0.0032
Wetland              0.0350
Impervious surface   0.9650
Bareland             0.0843
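A minimal sketch of the overlay analysis behind Table 1 is given below, assuming the flood map has been rasterized to a boolean mask aligned with a FROM-GLC tile; the class codes passed in the mapping are placeholders, not the official FROM-GLC legend.

```python
import numpy as np

def flooded_area_by_class(landcover, flood_mask, pixel_area_m2, class_names):
    """Cross a land cover raster with a flood mask and report the flooded
    area per class in km^2 (a simple GIS overlay).

    landcover     : (H, W) integer class codes
    flood_mask    : (H, W) boolean, True where flooded
    pixel_area_m2 : cell area, e.g. 100.0 for a 10 m grid
    class_names   : mapping from class code to class name (placeholder codes)
    """
    return {name: np.count_nonzero((landcover == code) & flood_mask) * pixel_area_m2 / 1e6
            for code, name in class_names.items()}
```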

Figure 8. Flood map overlaid on the FROM-GLC 2017 land cover map.

3.4. Validation with Sentinel-1 SAR Image

To validate the flood mapping results, we used conventional satellite remote sensing data for comparison. As the cloud coverage was high around 25 June 2019 over the study area, no cloud-free optical remote sensing data were available from typical satellites such as Planet, Sentinel-2 and Landsat. However, SAR (Synthetic Aperture Radar) can penetrate cloud cover and is sensitive to water, so it is widely applied in flood mapping. Here, we used a Sentinel-1A SAR L1 Ground Range Detected product (10 m resolution in IW mode) to retrieve the flood damage map for comparison. The SAR image was captured in an ascending orbit (No. 27835) with VV polarization on June 25, 2019 (18:25, local time), just 9 h after the VPARS observing time in this study. Some quantitative metrics were used for the evaluation, defined as follows:



$$\mathrm{Recall} = \frac{tp}{tp + fn}, \qquad \mathrm{Precision} = \frac{tp}{tp + fp}, \qquad \mathrm{F\text{-}score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}, \qquad (7)$$

where $tp$, $fp$, $tn$ and $fn$ represent the true positives, false positives, true negatives and false negatives in the confusion matrix, respectively.

Due to the low backscatter from the water body, the flooded area is clearly characterized by a dark color (Figure 9a). We also drew the flooded area manually through visual interpretation. The visual interpretation is not time-consuming for this relatively small study area but can give a robust estimation. The flood map from the SAR image agrees well with our VPARS results (Figure 9b). The quantitative assessment gives a high F-score of 0.9308. The recall and the precision are given as 0.9241 and 0.9377, respectively (Table 2). The main discrepancies between the two flood maps are located at the two islands in the river (Figure 10). The VPARS image suggests a smaller flooded area than SAR. Two possible reasons may explain these discrepancies. First, the flooded area may have expanded in the 9 h between the VPARS and SAR capturing times. Second, the half-flooded bare soil close to the flooded area also has a very low backscatter value similar to that of the water body. Considering there is no significant flood expansion in other regions, it is more likely that the SAR image cannot separate half-flooded soil from the water body.
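Given two rasterized flood masks on the same grid, the metrics of Equation (7) reduce to a few lines of numpy, as in this sketch (the SAR-derived map is treated as the reference):

```python
import numpy as np

def flood_map_metrics(predicted, reference):
    """Recall, precision and F-score of Equation (7) for boolean flood masks."""
    tp = np.count_nonzero(predicted & reference)
    fp = np.count_nonzero(predicted & ~reference)
    fn = np.count_nonzero(~predicted & reference)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_score = 2 * precision * recall / (precision + recall)
    return recall, precision, f_score
```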




Figure 9. (a) Sentinel-1A SAR image. (b) Flood map retrieved from SAR (red lines) and volunteered passenger aircraft remote sensing (VPARS) images (green polygons).

Table 2. Statistics of VPARS flood mapping validation.

Metrics     Value
Recall      0.9241
Precision   0.9377
F-score     0.9308




Figure 10. Typical discrepancies between flooded areas detected by (a) VPARS and (b) Sentinel-1 SAR images.

4. Discussion

4.1. Rapid Response to Emergency

As flood is a natural disaster with short-lived and large-scale impacts, remote sensing should provide reliable and timely damage assessment. To evaluate the response speed of satellite remote sensing, we investigated the data availability from Planet, the world's first company to revisit any location in a single day, which uses a fleet of Dove satellites to provide global observations at 3–5 m resolution for many customers around the world. We set the search area the same as the observing area of our study, which extends from Taimei town to Guanyinge town in Huizhou City, in the Guangdong province. Although Planet can achieve the shortest revisit period among all commercial satellite plans, our search results in this region show that Planet data were still not available for eight days in June 2019. In addition, Planet's coverage of this region is not stable; only 11 days have an area coverage larger than 50% (Table 3). Landsat and COSMO-SkyMed 4 are also popular data sources for assessing inundation extent based on multispectral and microwave imaging. Landsat 8 is equipped with the Operational Land Imager, which provides 30 m spatial resolution multispectral data with a revisit time of 16 days. The spatial resolution of COSMO-SkyMed 4 is 5 m with a 5-day revisit time. Their long revisit periods make it difficult to respond to floods rapidly.

Table 3. Area coverage from Planet in June.

Date               Area Coverage (%)
June 1–June 10      5   6   0  35  20  26  57  77  79   8
June 11–June 20     0   0   0  37   0  63  64  40  50  80
June 21–June 30    60  66   0   0  64   0  63  39  60  60
However, as shown in the statistics of Guangdong airport routes from Variflight (Table 4), we found that the revisit time and coverage of VPARS are good enough to obtain much more data than satellites. Because of the high revisit rate and the large number of flights, it is easier to monitor the regions within the flight routes based on the VPARS method [18]. Currently, satellites can hardly repeat high-frequency surveillance of an area within a few days (or one day), and cannot provide the spatio-temporal coverage required for flood assessment. As flood disasters change continuously in time, disaster management requires up-to-date and accurate flood damage information. Due to the large number of flights and the wide coverage of flight lines (Figure 11), the VPARS therefore has the capability to produce timely and accurate flood assessments, which is critically important to prioritize relief efforts and plan mitigation measures against damage.



Table 4. Statistics of Guangdong Airport Routes on June 25, 2019.

                    CAN   MXZ   FUO   SWA   LDG   SZX   HUZ   ZUH   ZHA   HSC
Connection cities   179     6     9    45     0   138    31    56    34     0
Flight routes       422     9    24   106     0   377    65   163    73     0
Flights            1364    13    24   167     0  1036    78   399    96     0

Figure 11. Flight lines in China on June 25, 2019 (https://www.flightaware.com).

4.2. Cloud-Free Image Generation

Clouds are an important factor preventing remote sensing applications. Normally, clouds are distributed at an altitude of about 2000–6000 m above the ground, while remote sensing satellites are usually at an altitude of around 400 km, which makes it easy for them to miss ground images because of cloud cover. We took the remote sensing data from Planet as an example; they are captured by the Dove satellites with orbital heights of 420–475 km. Unfortunately, the Dove satellites did not capture images covering the experiment area on June 25, so we searched the Planet images from the surrounding days in June for comparison (Figure 12). At this altitude, the Dove satellites cannot see the ground directly through the clouds when imaging. As shown in the search results (Figure 12), all these images are covered by clouds, and some of them are even incomplete. It is therefore very challenging for satellite remote sensing to overcome the cloud cover limitation.

Figure 12. Image availability searched from the Planet website (https://www.planet.com/).


In the VPARS, since the flight altitude of the aircraft is generally between 6000 and 12,000 m, the relatively low observation height makes it easier for the aircraft camera to break through the cloud interference and obtain ground data, as shown in Figure 13. However, this cannot eliminate cloud cover entirely in a single photo, so in our study we used a stacking approach to combine the cloud-free parts of several images taken from different angles (Figure 3b). The cloud influence can therefore be easily removed in the VPARS.
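The paper selects cloud-free source images manually; an automated per-pixel variant of the same stacking idea is sketched below, assuming co-registered orthophotos and a cloud mask per photo (both assumptions for illustration).

```python
import numpy as np

def cloud_free_composite(orthophotos, cloud_masks):
    """Compose a cloud-free image from multi-angle orthophotos.

    orthophotos : (K, H, W) co-registered orthorectified images
    cloud_masks : (K, H, W) boolean, True where a photo is cloud-contaminated
    Each cell averages only its cloud-free observations; cells cloudy in all
    K views stay NaN and would need further acquisitions.
    """
    stack = np.where(cloud_masks, np.nan, np.asarray(orthophotos, dtype=float))
    return np.nanmean(stack, axis=0)
```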

Figure 13. A sketch map showing the difference in cloud impact on satellites and aircraft.

4.3. Some Limitations

Although the VPARS has the advantages of rapid response and cloud removal for flood disaster management, there are still several limitations to this method. The first shortcoming we find is image quality. VPARS data are captured by customer-level cameras from passenger volunteers, and the vibration during flight and shooting photos through the thick glass of aircraft cabin windows may produce low-quality images, which may aggravate image processing. Conversely, professional optical RS cameras for satellite remote sensing can provide a series of stable, high-precision and easy-to-handle images. Another drawback is the geometric accuracy. As there is no high-precision position and orientation system in customer-level cameras, the geometric error is relatively larger than that of high-resolution optical satellite remote sensing products. A detailed comparison between VPARS and satellite RS is listed in Table 5.

Table 5. Comparison between passenger aircraft remote sensing (RS) and satellite optical RS.

                        Rapid Response   Less Limited by Cloud   High Image Quality   High Geometric Accuracy
Passenger aircraft RS   Yes              Yes                     No                   No
Satellite optical RS    No               No                      Yes                  Yes

5. Conclusions

In summary, we have proposed a novel remote sensing method using a passenger aircraft platform for flood observation. Currently, flood remote sensing is mainly limited by satellite orbit revisit times and cloud cover, resulting in delayed, partial-coverage and incomplete flood assessments. However, a timely and accurate flood assessment is critically important for flood mitigation and response. The proposed VPARS method has advantages such as a high revisit frequency because of the frequent passenger aircraft flights, and the relatively low flight altitude of passenger aircraft makes it possible to observe ground objects underneath clouds. The cloud-cover influence can therefore be removed by combining multi-view images taken in the passenger aircraft. This case study has validated the capability of the VPARS in flood-damage mapping and cloud-free orthomosaic image generation. However, VPARS also faces challenges in data quality and geometric accuracy. In future work, efforts

can be made in fusing multi-source high-quality satellite images and developing information retrieval methods [32] to improve the VPARS data quality.

Author Contributions: Conceptualization, C.W. and Q.L.; methodology, C.W.; writing—original draft preparation, J.K.; writing—review and editing, C.W. and J.K.; supervision, Q.L.; project administration, W.X. and K.Y.; funding acquisition, W.X. and K.Y.

Funding: This work was partially funded by the Shenzhen Scientific Research and Development Funding Program (KQJSCX20180328093453763, JCYJ20180305125101282, JCYJ20170412142144518), the National Natural Science Foundation of China (No. 61802265), the Open Fund of the State Key Laboratory of Earthquake Dynamics (LED2016B03), and the Open Fund of the Key Laboratory of Urban Land Resources Monitoring and Simulation, Ministry of Land and Resources (KF-2018-03-004).

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Kuroda, T.; Koizumi, S.; Orii, T. A Plan for a Global Disaster Observation Satellite System (GDOS); Springer: Dordrecht, The Netherlands, 1997.
2. Wang, J.; Sammis, T.W.; Gutschick, V.P. A remote sensing model estimating lake evaporation. In Proceedings of the IEEE International Geoscience & Remote Sensing Symposium, Boston, MA, USA, 8–11 July 2008.
3. Wang, C.; Li, Q.; Liu, Y.; Wu, G.; Liu, P.; Ding, X. A comparison of waveform processing algorithms for single-wavelength LiDAR bathymetry. ISPRS J. Photogramm. Remote Sens. 2015, 101, 22–35.
4. Wang, C.; Ding, X.; Shan, X.; Zhang, L.; Jiang, M. Slip distribution of the 2011 Tohoku earthquake derived from joint inversion of GPS, InSAR and seafloor GPS/acoustic measurements. J. Asian Earth Sci. 2012, 57, 128–136.
5. Tralli, D.M.; Blom, R.G.; Zlotnicki, V.; Donnellan, A.; Evans, D.L. Satellite remote sensing of earthquake, volcano, flood, landslide and coastal inundation hazards. ISPRS J. Photogramm. Remote Sens. 2005, 59, 185–198.
6. Jaiswal, R.K.; Mukherjee, S.; Raju, K.D.; Saxena, R. Forest fire risk zone mapping from satellite imagery and GIS. Int. J. Appl. Earth Obs. Geoinf. 2003, 4, 1–10.
7. Lee, S. Application of logistic regression model and its validation for landslide susceptibility mapping using GIS and remote sensing data. Int. J. Remote Sens. 2005, 26, 1477–1491.
8. Boccardo, P.; Tonolo, F.G. Remote sensing role in emergency mapping for disaster response. In Engineering Geology for Society and Territory, Volume 5; Springer: Cham, Switzerland, 2015; pp. 17–24.
9. Voigt, S.; Giulio-Tonolo, F.; Lyons, J.; Kučera, J.; Jones, B.; Schneiderhan, T.; Platzeck, G.; Kaku, K.; Hazarika, M.K.; Czaran, L. Global trends in satellite-based emergency mapping. Science 2016, 353, 247–252.
10. Whitehead, K.; Hugenholtz, C.H. Remote sensing of the environment with small unmanned aircraft systems (UASs), part 1: A review of progress and challenges. J. Unmanned Veh. Syst. 2014, 2, 69–85.
11. Boshuizen, C.; Mason, J.; Klupar, P.; Spanhake, S. Results from the Planet Labs Flock constellation. In Proceedings of the AIAA/USU Conference on Small Satellites, Logan, UT, USA, 4–7 August 2014.
12. FlightAware. 2018. Available online: https://flightaware.com/ (accessed on 19 July 2019).
13. Rosser, J.F.; Leibovici, D.G.; Jackson, M.J. Rapid flood inundation mapping using social media, remote sensing and topographic data. Nat. Hazards 2017, 87, 103–120.
14. Schnebele, E.; Cervone, G. Improving remote sensing flood assessment using volunteered geographical data. Nat. Hazards Earth Syst. Sci. 2013, 13, 669–677.
15. Poser, K.; Dransch, D. Volunteered geographic information for disaster management with application to rapid flood damage estimation. Geomatica 2010, 64, 89–98.
16. Snavely, N.; Seitz, S.M.; Szeliski, R. Modeling the world from Internet photo collections. Int. J. Comput. Vis. 2008, 80, 189–210.
17. Westoby, M.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. Structure-from-Motion photogrammetry: A novel, low-cost tool for geomorphological applications. Geomorphology 2012, 179, 300–314.
18. Abrams, M.; Bailey, B.; Tsu, H.; Hato, M. The ASTER Global DEM. Photogramm. Eng. Remote Sens. 2010, 76, 344–348.

19. Agisoft. User Manual: Agisoft PhotoScan Professional Edition; Agisoft LLC: St. Petersburg, Russia, 2018.
20. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; pp. 1150–1157.
21. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Read. Comput. Vis. 1987, 24, 726–740.
22. Lourakis, M.I.A.; Argyros, A.A. SBA: A software package for generic sparse bundle adjustment. ACM Trans. Math. Softw. 2009, 36, 2.
23. Tang, S.; Chen, W.; Wang, W.; Li, X.; Darwish, W.; Li, W.; Huang, Z.; Hu, H.; Guo, R. Geometric integration of hybrid correspondences for RGB-D unidirectional tracking. Sensors 2018, 18, 1385.
24. Helmer, E.H.; Ruefenacht, B. Cloud-free satellite image mosaics with regression trees and histogram matching. Photogramm. Eng. Remote Sens. 2005, 71, 1079–1089.
25. Min, L.I.; Liew, S.C.; Kwoh, L.K. Automated production of cloud-free and cloud shadow-free image mosaics from cloudy satellite imagery. In Proceedings of the XXth ISPRS Congress, Toulouse, France, 21–25 July 2003; pp. 12–13.
26. CHINADAILY. Heavy Rain Leaves 13 Dead, 2 Missing in Guangdong. Available online: http://www.chinadaily.com.cn/a/201906/13/WS5d0231a7a3103dbf143280a0.html (accessed on 13 June 2019).
27. Zhang, Q.; Zhang, W.; Chen, Y.D.; Jiang, T. Flood, drought and typhoon disasters during the last half-century in the Guangdong province, China. Nat. Hazards 2011, 57, 267–278.
28. Zhen, G.; Li, Y.; Tong, Y.; Yang, L.; Zhu, Y.; Zhang, W. Temporal variation and regional transfer of heavy metals in the Pearl (Zhujiang) River, China. Environ. Sci. Pollut. Res. Int. 2016, 23, 8410–8420.
29. Wang, C.; Shu, Q.; Wang, X.; Guo, B.; Liu, P.; Li, Q. A random forest classifier based on pixel comparison features for urban LiDAR data. ISPRS J. Photogramm. Remote Sens. 2019, 148, 75–86.
30. Gong, P.; Liu, H.; Zhang, M.; Li, C.; Wang, J.; Huang, H.; Clinton, N.; Ji, L.; Li, W.; Bai, Y.; et al. Stable classification with limited sample: Transferring a 30-m resolution sample set collected in 2015 to mapping 10-m resolution global land cover in 2017. Sci. Bull. 2019, 64, 370–373.
31. Gong, P.; Li, X.; Zhang, W. 40-year (1978–2017) human settlement changes in China reflected by impervious surfaces from satellite remote sensing. Sci. Bull. 2019, 64, 756–763.
32. Li, Y.; Zhang, Y.; Xin, H.; Hu, Z.; Ma, J. Large-scale remote sensing image retrieval by deep hashing neural networks. IEEE Trans. Geosci. Remote Sens. 2017, PP, 1–16.

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).