
Article

Integrate Weather Radar and Monitoring Devices for Urban Flooding Surveillance

Shih-Yen Hsu 1,2, Tai-Been Chen 2, Wei-Chang Du 1, Jyh-Horng Wu 3 and Shih-Chieh Chen 4,*

1 Department of Information Engineering, I-Shou University, No.1, Sec. 1, Syuecheng Rd., Dashu District, Kaohsiung 84001, Taiwan; [email protected] (S.-Y.H.); [email protected] (W.-C.D.)
2 Department of Medical Imaging and Radiological Science, I-Shou University, No.8, Yida Rd., Jiaosu Village, Yanchao District, Kaohsiung 82445, Taiwan; [email protected]
3 National Center for High-Performance Computing, No. 7, R&D 6th Rd., Science Park, Hsinchu 30076, Taiwan; [email protected]
4 Department of Civil and Ecological Engineering, I-Shou University, No.1, Sec. 1, Syuecheng Rd., Dashu District, Kaohsiung 84001, Taiwan
* Correspondence: [email protected]; Tel.: +886-6157711 (ext. 3325)

 Received: 21 December 2018; Accepted: 13 February 2019; Published: 17 February 2019 

Abstract: With the increase in extreme weather events, the frequency and severity of urban flood events around the world are rising drastically. This study therefore develops ARMT (automatic combined ground weather radar and CCTV (Closed Circuit Television System) images for real-time flood monitoring), which integrates real-time ground radar echo images and automatically estimates the rainfall hotspot according to the cloud intensity. ARMT further combines CCTV image capture, analysis, Fourier processing, identification, water level estimation, and data transmission to provide real-time warning information. In addition, the hydrograph data can serve as a reference on which disaster prevention and response personnel can base their judgements. ARMT was tested with historical data input, which showed its reliability to be between 83% and 92%. When applied to real-time monitoring and analysis (e.g., during a typhoon), its reliability ranged from 79% to 93%. With the technology providing both images and quantified water levels for flood monitoring, decision makers can better understand the on-site situation quickly, make evacuation decisions before a flood disaster occurs, and discuss appropriate mitigation measures after the disaster to reduce the adverse effects that flooding poses on urban areas.

Keywords: ARMT; CCTV; ground weather radar image

1. Introduction

1.1. The Urban Flood Disaster

As the climate changes, global rainfall patterns keep shifting. According to the Global Climate Observing System (GCOS), rainfall in many cities around the world is becoming more severe and more variable [1]. Taiwan is located in a region prone to natural disasters, where short-duration heavy rainfall often floods urban regions because the excessive rainfall cannot be drained within a short period of time, usually causing considerable economic losses. The increasing incidence and severity of flooding poses a threat to most cities around the world [2,3]. When life-threatening floods occur in low-lying areas, the manpower and physical resources that must be mobilized for evacuation and withdrawal make disaster relief operations more difficult. These days, early warning systems are widely used in flood disaster and water resources monitoring and forecasting; they collect fixed-point and fixed-time information through monitoring hardware devices, such as sensors or water level gauges. Yet, they lack image data to prove whether the gauged target is the actual water level or another object. The forecasts from existing early warning systems are based on risk models built from historical data or satellite cloud images; however, their low specificity and inability to support localized regional analysis are present disadvantages [4]. For these reasons, this research aims to develop technology that uses ground weather radar images to automatically determine the position of the cloud cover and in turn predict possible rainfall areas. Additionally, connecting to remote monitors for real-time image data within a specific range of the rainfall area automatically enables CCTV (Closed Circuit Television System) images to be interfaced and analyzed. Furthermore, flood identification and flood depth calculation can be carried out with the image analysis technology and processing method, which can serve urban intelligent monitoring and early warning and provide disaster relief units with the analytic results for reference. Hence, ARMT (automatic combined ground weather radar and CCTV images for real-time flood monitoring) was developed in this study, and the related algorithms are described below.

1.2. Establishment of an Automated Urban Flood Monitoring System

In Taiwan, more than 40,000 surveillance cameras are mounted in cities, at intersections, on river embankments, and so on. This study takes advantage of these surveillance camera images to create a technology that identifies flooding automatically. To attain all-weather automated identification, the system applies weather radar as the criterion for activating identification. Radar echoes are used to observe the amount, structural distribution, intensity, and movement of cloud cover through the strength of the echo signals. A higher echo intensity indicates thicker clouds and more abundant rainfall; a lower echo intensity indicates thinner clouds and less rainfall. The region with the thickest clouds tends to bear the most precipitation. Radar echoes mainly detect water droplets in the air. Detection is carried out with a radar antenna, transmitter, receiver, and radome by emitting electromagnetic wave signals across a specific elevation angle range. When an electromagnetic wave meets water droplets or objects in the air, a reflected signal is generated and then received by the receiver. When the target moves closer to the transmitter, the reflected wave becomes denser; when the target moves away, the reflected wave becomes sparser. The complete radar echo information is obtained through signal processing and analysis. Due to the high sensitivity of radar echoes to water droplet signals, they are often used by scholars for rainfall prediction [5,6]. Based on our investigation, there are currently four radar echo stations in Taiwan, whose combined signals cover the whole of Taiwan. The weather radar shows the current weather condition and is used to predict future weather after the radar detects the structural distribution, intensity, and movement of precipitation particles (rain, snow, hail) in the atmosphere. The commonly used unit in Doppler radar is dBZ, which measures the strength of precipitation. Z stands for the reflectivity factor, measured in mm⁶·m⁻³; dB means decibel, and the dBZ value is calculated as dBZ = 10·log₁₀(Z) [7]. The design of this integrated ground weather radar draws on the weather radar images from the Central Weather Bureau in Taiwan. Radar reflectivity is defined as the signal reflected from precipitation particles (rain, snow, hail, etc.) in the atmosphere after the radar electromagnetic wave is sent out. The received reflected signals from the precipitation particles are displayed in different colors according to their intensity to create the weather radar image. The intensity of the reflected waves depends on the size, shape, and state of the precipitation particles and the number of particles per unit volume. Generally, a stronger reflected signal signifies more intense precipitation. As a result, weather radar can provide information to interpret the precipitation intensity and structural distribution of the weather [6,8].
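As a small illustration of the dBZ relation quoted above (not part of the original paper), the conversion from the reflectivity factor Z to dBZ can be written as:

```python
import math

def z_to_dbz(z: float) -> float:
    """Convert the radar reflectivity factor Z (mm^6/m^3) to dBZ via dBZ = 10*log10(Z)."""
    return 10.0 * math.log10(z)

# Example: Z of about 31.6 mm^6/m^3 corresponds to roughly 15 dBZ, the
# green-level threshold that Section 2.1 later uses to flag possible rainfall.
print(round(z_to_dbz(31.6), 1))  # -> 15.0
```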

This study aims to integrate radar echo technology into the automated activation of a monitoring image water level/water depth analysis system. When the weather radar predicts likely precipitation in a region, the system automatically activates the monitoring image analytic module within that region. Furthermore, it switches the monitored regions immediately according to changes in the clouds and precipitation. In this way, the technology achieves a more adaptive way of monitoring, which improves the monitoring system. Before carrying out monitoring and processing analysis, we first integrated the data from all CCTV image stations, established a database of image parameters, and set all relevant parameters, including the latitude and longitude of the surveillance stations, image scales, and warning thresholds. After that, we defined each identification region under its appropriate surveillance station. Following the completion of the above work, the system can carry out monitoring and processing automatically each time there is a precipitation event. If the identification meets the warning threshold condition, the system immediately sends out the analytic result to notify the relevant disaster prevention personnel. In the process of monitoring and processing analysis, the weather radar firstly identifies the rainfall hotspot and calculates the corresponding longitude and latitude coordinates. Then the surveillance cameras within a range near the coordinates are located, and at the same time, the image identification and analysis system is activated for monitoring. Once the system is activated, it continuously captures surveillance camera images every 10 min, which are processed by the algorithm developed in this study to estimate flood depth. To do the estimation, proper images are first selected, followed by image intensification. Finally, the water level/water depth is estimated from the image. During the process of continuous calculation, the error is corrected if the difference between the former and latter water level/water depth estimates is too large. If the interpretation is that there is a flood event—reaching the warning water level—the system immediately sends out an analytic result warning notice and continues monitoring until the rainfall event ends, as shown in Figure 1.

Figure 1. A flow chart of the automated urban flood monitoring system.

2. Intelligent Urban Flood Surveillance

2.1. Cloud Thickness Estimation with Respect to Weather Radar

After the weather radar image is acquired, the region range is defined with respect to the latitude and longitude of Taiwan, which ranges in latitude from 21 to 26 degrees north and in longitude from 119 to 123 degrees east. The image part within this range of the map is quantified so that all pixel intensity values in the region are obtained. Then each pixel intensity value is examined to determine whether it is larger than the set threshold value—the reflectivity threshold is set to 15 dBZ, which represents the green color in the image and is considered possible rainfall. Then the centroid position is estimated, which represents the cloud cover position within the region, signifying a region with thicker cloud cover and thus a higher probability of rainfall. The surveillance cameras within a certain range (such as 5 km, 10 km, 50 km, etc.) around the centroid position are then interfaced, and the surveillance images are obtained automatically, followed by identification and analysis of the water level in the images of the region. Each activation cycle is preset to 24 h. Provided there is no cloud cover for 2 h, monitoring stops automatically until the next appearance of cloud cover is detected.

(I) An Algorithm for Automated Monitoring Activation with Respect to Ground Weather Radar

Step 1: Link to the Central Weather Bureau website to acquire the terrain-free weather radar image.
Step 2: Process the image, including grayscale conversion, echo map matrix scaling to overlap the Taiwan administrative region map, and conversion of pixel positions into latitude and longitude.
Step 3: Estimate the cloud cover appearing on the weather radar within the scope of latitude 21 to 26 degrees north and longitude 119 to 123 degrees east: cloud thickness estimation, centroid calculation, and comparison of CCTV images within a 5, 10, and 50 km radius of the centroid, achieved by searching the database for CCTV locations to obtain the images.
Step 4: Monitor unceasingly through CCTV cameras for 24 h. If waterlogging or flooding occurs, a notice is issued.
Step 5: The stop mechanism: monitoring stops when no cloud cover is detected for 2 consecutive hours within the field of latitude 21 to 26 degrees north and longitude 119 to 123 degrees east.
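The following sketch (ours, not the authors' code) outlines how the five steps could be wired together; `fetch_radar`, `find_hotspot`, `cameras_near`, `analyze_level`, and `notify` are hypothetical stand-ins for the paper's modules:

```python
import time
from datetime import datetime, timedelta

CYCLE_HOURS = 24            # preset activation cycle (Step 4)
STOP_AFTER_NO_CLOUD_H = 2   # stop mechanism (Step 5)
CAPTURE_INTERVAL_MIN = 10   # capture frequency given in Section 1.2

def monitoring_cycle(fetch_radar, find_hotspot, cameras_near, analyze_level, notify):
    """A minimal sketch of Algorithm I; all five callables are assumptions."""
    cycle_end = datetime.now() + timedelta(hours=CYCLE_HOURS)
    last_cloud = datetime.now()
    while datetime.now() < cycle_end:
        radar = fetch_radar()                    # Steps 1-2: acquire and preprocess the echo map
        hotspot = find_hotspot(radar)            # Step 3: cloud centroid in lat/lon, or None
        if hotspot is not None:
            last_cloud = datetime.now()
            for cam in cameras_near(hotspot, radii_km=(5, 10, 50)):
                level = analyze_level(cam)       # Step 4: CCTV water-level estimate
                if level.exceeds_warning:        # hypothetical attribute of the estimate
                    notify(cam, level)           # issue a notice
        elif datetime.now() - last_cloud > timedelta(hours=STOP_AFTER_NO_CLOUD_H):
            break                                # Step 5: no cloud for 2 h, stop
        time.sleep(CAPTURE_INTERVAL_MIN * 60)
```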

To run the algorithm of centroid analysis with respect to ground radar echoes, we need to begin by defining and dividing the administrative regions. In the process of quantifying and calculating the thunderstorm cell centroid, the image of the region is first converted to a grayscale image, and the gray level values are taken as the intensity values of the pixels in the region. After the intensity values are obtained, the centroid position can be calculated with the following equation:

$$C_x = \frac{\sum_{x=1}^{n}\left(x \cdot \sum_{y=1}^{n} m_{xy}\right)}{\sum_{x=1}^{n}\sum_{y=1}^{n} m_{xy}}, \qquad C_y = \frac{\sum_{y=1}^{n}\left(y \cdot \sum_{x=1}^{n} m_{xy}\right)}{\sum_{x=1}^{n}\sum_{y=1}^{n} m_{xy}} \tag{1}$$

where Cx represents the position of the centroid along the X-axis in the coordinate plane and Cy the position along the Y-axis. n denotes the maximum representative position of the operation points along the X- and Y-axes, and n is an integer. x is the coordinate value on the X-axis, y is the coordinate value on the Y-axis, and mxy represents the intensity value of the operation point located at the (x, y) coordinates. The centroid position determined through this calculation is shown in the black box in Figure 2. Once the centroid position is determined, the coordinate values representing the cloud cover position are converted into latitude and longitude coordinates and then provided to the monitoring system to automatically start the subsequent work.
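A compact NumPy rendering of Equation (1) (an illustrative sketch, not the authors' implementation) might look as follows; the grayscale threshold taken to correspond to the 15 dBZ green level is an assumed parameter:

```python
import numpy as np

def cloud_centroid(gray: np.ndarray, thr: float):
    """Intensity-weighted centroid per Equation (1). Pixels at or below `thr`
    (the grayscale value assumed to correspond to 15 dBZ) are zeroed so that
    only likely-rain cloud contributes. Returns (Cx, Cy) in pixel coordinates,
    or None when no pixel exceeds the threshold."""
    m = np.where(gray > thr, gray.astype(np.float64), 0.0)  # m_xy weights
    total = m.sum()
    if total == 0:
        return None
    ys, xs = np.mgrid[0:gray.shape[0], 0:gray.shape[1]]
    cx = (xs * m).sum() / total   # Cx: x weighted by column sums of m_xy
    cy = (ys * m).sum() / total   # Cy: y weighted by row sums of m_xy
    return cx, cy
```

The pixel-coordinate centroid is then mapped to latitude and longitude with the pixel-to-coordinate conversion from Step 2.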


Figure 2. The calculation results of the centroid based on the radar echo image. The left panel presents the input image; the position of the black spot in the right panel denotes the estimated centroid position.

2.2. Image Water Level Identification Technology

As for the image identification, the CCTV camera images are initially procured, followed by selecting the identifying area of an obvious target object (as shown in the box of Figure 3) for water level/water depth determination and dissection. After the target object is selected, we use four image features—entropy, edge value, discrete cosine transform (DCT), and signal-to-noise ratio (SNR)—to conduct the image classification and selection and obtain the estimated water level, which is subsequently converted into real water depth through the pixel scale and documented in the database.

Figure 3. The target object (ROI) plot in the image.

(II) An Algorithm for Water Level Identification with Respect to Monitoring Image

Step 1: Select appropriate images according to the image features.
Step 2: Select images with obvious target objects (e.g., a white wall, a column).
Step 3: Design the virtual water gauge—also called the digital water gauge, image water depth identification region, or region of interest (ROI)—and obtain the image scale to estimate the flood water depth by setting the conversion between image pixel values and the actual scale. The unit of the image scale is centimeters per pixel (cm/pixel). The conversion of the image scale is performed as follows. (1) Target objects identifiable in the image, such as a white wall, a telecommunications cabinet, or an electric pole, are used to conduct the conversion of the image scale. (2) For the case without any fixed and obvious target objects in the image, we scouted the site for information to attain the image scale conversion between image pixels and on-site actual size.
Step 4: Use image processing and image quantification technology to estimate the water level.
Step 5: Correct the error in the estimated water level and store it in the database for future warning issuing (issued by the implementation unit) and water level situation notifying (decided by the implementation unit).

In order to improve the accuracy of the estimated water level, screening and filtering of the CCTV images was necessary. According to the literature, entropy, edge value, DCT, and SNR were used as image features in deciding whether a CCTV image can be used to estimate the water level. The entropy mainly captures the "indetermination" of the image pixel values: the more complex the image, the higher the entropy value. For example, if the CCTV image included raindrops, the entropy would reduce because the texture was smoother. Furthermore, a blurred image would also get a low entropy value. The edge value was calculated through different brightness gradient values. The clearer the image quality, the more edge lines it would yield; otherwise there would be fewer (for example, in images with raindrops, image blurring, etc.), and the edge lines would also be discontinuous (Figure 4).

Figure 4. The edge analysis. (A) CCTV image in the normal case. (B) The edge detection of A. (C) CCTV image with raindrops. (D) The edge detection of C.

The DCT captures the rate of change between two pixels in an image, arranged from lower to higher frequency (blurry to clear). In the DCT, if the low-frequency part is larger, the image is more blurred; if the high-frequency part is larger, the image is clearer. In Figure 5, the DCT differs between the two kinds of CCTV images, determining whether they can be used to estimate the water level or not. Finally, the screening standard for CCTV images was extracted from the four features and built with a J48 decision tree. An image can be used to estimate the water level when its entropy, edge, DCT, and SNR are >7.1, >0.04, >0.1, and <3.5, respectively.
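A possible reading of this screen in code is sketched below. The paper does not give the exact feature formulas, so the entropy, edge, DCT, and SNR definitions here are plausible reconstructions; only the four cut-off values are taken from the text:

```python
import numpy as np
from scipy.fftpack import dct

def usable_for_level_estimation(gray: np.ndarray) -> bool:
    """Apply the J48-derived screen: entropy > 7.1, edge > 0.04,
    DCT > 0.1, and SNR < 3.5 (feature definitions are assumptions)."""
    g = gray.astype(np.float64)

    # Shannon entropy of the pixel histogram (bits).
    hist, _ = np.histogram(g, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))

    # Edge value: mean gradient magnitude, normalized to [0, 1].
    gy, gx = np.gradient(g)
    edge = np.mean(np.hypot(gx, gy)) / 255.0

    # DCT feature: share of spectral energy outside the low-frequency corner.
    c = dct(dct(g, axis=0, norm='ortho'), axis=1, norm='ortho')
    k = max(1, min(c.shape) // 8)
    total = np.sum(c ** 2)
    dct_ratio = (total - np.sum(c[:k, :k] ** 2)) / total if total > 0 else 0.0

    # SNR: mean pixel value over its standard deviation.
    snr = g.mean() / (g.std() + 1e-9)

    return entropy > 7.1 and edge > 0.04 and dct_ratio > 0.1 and snr < 3.5
```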


Figure 5. The DCT images which can be used to estimate the water level (a) or not (b).

For water level estimation, the CCTV camera images are first subjected to grayscale conversion and image intensification. Since the water surface for analysis is likely in a non-horizontal plane, horizontal correction must be performed and the image scale of the actual water depth defined before the analysis starts. Because the camera does not always face the front horizontally, it is necessary to correct the level of the detected object into the horizontal to facilitate the detection of flood depth. This is performed mainly by selecting four points on the image, the first two of which are placed on the water surface and used to calculate the slope for correcting to horizontal by image rotation. The latter two points are placed at two target objects with a known actual height. For example, suppose that a window and the ground are known to be 130 cm apart; by selecting these two image positions, we determine the difference between the two points to be 120 pixels. From this, it can be extrapolated that one pixel amounts to 1.08 cm. During the analysis, the ROI function is used to select the water depth identification region (Figure 6). This method can not only increase the computing efficiency but also reduce foreign object interference in the image region and improve the accuracy of the flood depth calculation [9–11].

Figure 6. Image scale calculation from CCTV image.
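The scale conversion and the leveling step can be expressed compactly; this worked example reproduces the paper's 130 cm / 120 pixel illustration (the helper names are ours):

```python
import math

def image_scale_cm_per_pixel(actual_cm: float, pixel_distance: float) -> float:
    """Image scale from two reference points with a known real-world separation."""
    return actual_cm / pixel_distance

def horizontal_correction_deg(p1, p2):
    """Rotation angle (degrees) that levels the water surface, computed from
    the two points selected on it in the four-point procedure above."""
    (x1, y1), (x2, y2) = p1, p2
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

# Worked example from the text: 130 cm spanning 120 pixels -> about 1.08 cm/pixel.
print(round(image_scale_cm_per_pixel(130, 120), 2))  # -> 1.08
```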


Afterwards, the selected ROI was processed to remove noise using a median filter mask. The filter mask size is 1 × w, where w is half the width of the ROI. Subjecting the ROI to median filter processing makes the estimation of flood depth more efficient as well as removing the noise, and the median filter does not interfere with the resulting accuracy. The main principle of the method is that the filter mask conducts detection and comparison over the ROI, and the pixels in the ROI are replaced by the median within the mask [12–14].

Now the ROI is ready to be processed through edge detection [15–18]. This is conducted mainly by counting the grayscale values of all pixels in the ROI and calculating the threshold value [19,20]. When the accumulated grayscale values of the horizontal pixels are greater than the threshold value—the threshold value is set to half the width of the ROI—the row is determined to be the water surface. Then, by selecting a pixel position with a known height and inputting the corresponding actual height in centimeters, the actual water level height can be calculated (Figure 7). If there are several measurements for the water surface, the current water level is determined with reference to the previous water level.
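One way to realize this chain (a sketch under stated assumptions, since the paper specifies the 1 × w median mask and the half-width row threshold but not the exact edge operator) is:

```python
import numpy as np
from scipy.ndimage import median_filter

def detect_water_line(roi: np.ndarray, edge_thr: float = 10.0):
    """Median-filter the ROI with a 1 x w mask (w = half the ROI width), build
    a simple vertical-gradient edge map (assumed; the paper cites Canny-style
    detection [15-18]), and scan rows against a half-width threshold.
    Returns the row index taken as the water surface, or None."""
    h, width = roi.shape
    w = max(1, width // 2)
    smoothed = median_filter(roi.astype(np.float64), size=(1, w))  # row-wise denoising
    edges = np.abs(np.diff(smoothed, axis=0)) > edge_thr           # crude edge map
    row_counts = edges.sum(axis=1)                                 # edge pixels per row
    hits = np.nonzero(row_counts > width / 2)[0]                   # half-width threshold
    return int(hits[0]) if hits.size else None
```

With the water line row in hand, the actual level follows from the reference pixel with known height and the cm/pixel scale described earlier.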


Figure 7. The processing result of the selected target object region in the image—(a) the original ROI, (b) the ROI processed through the median filter, (c) the ROI processed through edge detection.

During monitoring, if the water level/water depth reaches the alert value (e.g., a water level of 112.5 m at Lansheng Bridge is alert level 2), which is automatically detected by the system, the system sends an e-mail report to notify ARMT users. After sending the e-mail, the log message is recorded in the process database. The warning message includes the supplier of the image, the image capture time, the system report time, a message content description, the current picture, etc. (Figure 8).

Figure 8. The alert message report of ARMT by e-mail.
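A minimal sketch of the e-mail step (server address, sender, and message layout are placeholders; the paper only names the fields the alert carries) could be:

```python
import smtplib
from email.message import EmailMessage

def send_flood_alert(station, level_m, image_path, smtp_host, recipients):
    """Send an ARMT-style alert carrying the fields named in the text:
    station, capture/report context, description, and the current picture."""
    msg = EmailMessage()
    msg["Subject"] = f"[ARMT] Flood alert at {station}: {level_m:.1f} m"
    msg["From"] = "armt-alert@example.org"          # placeholder sender
    msg["To"] = ", ".join(recipients)
    msg.set_content(f"Station {station} reached the alert level at {level_m:.1f} m.")
    with open(image_path, "rb") as f:               # attach the current CCTV picture
        msg.add_attachment(f.read(), maintype="image", subtype="jpeg",
                           filename="current.jpg")
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)
```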



2.3. Performance of Image Water Level Identification

Considering that foreign objects may interfere with the monitoring images, and hence an estimation error exists during the automated flood estimation by the system, it is necessary to correct the water level/water depth estimation error. This is executed through correction of the error point with the water level/water depth at the previous time point, as follows: if the estimated water level/water depth ($W_t$) at the current time ($t$) is greater than the threshold (thr) set in the system, then the image water level/water depth estimate is corrected to obtain the corrected water level/water depth ($\hat{W}_t$) [21], as shown in Equation (2):

$$\hat{W}_t = \begin{cases} W_t, & W_t < \text{thr} \\ W_{t-1}, & W_t \geq \text{thr} \end{cases} \tag{2}$$
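Equation (2) amounts to a one-line guard; a sketch (ours, not the authors' code):

```python
def corrected_level(w_t: float, w_prev: float, thr: float) -> float:
    """Equation (2): keep the current estimate W_t when it is below the system
    threshold; otherwise fall back to the previous time step's value W_{t-1}."""
    return w_t if w_t < thr else w_prev
```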

In order to determine the reliability of the calculated water level/water depth after error correction, the analytic result is examined through reliability estimation. The estimation error of the water level/water depth is assessed through the absolute difference between the real measured value observed by human eyes (Weyes,i) and the value estimated by the image identification technology (Wcomputer,i) for the water level of the i-th image; Equation (3) defines the indicator Errori, which marks whether this absolute difference is within 50 cm. The smaller the absolute difference, the more reliable the image identification water level/water depth estimation. ARMT adopts a non-contact method: it uses the already-installed CCTV to estimate the water level/water depth while human observers simultaneously measure the current water level on-site through a hydrograph; the observed values are then compared with the results calculated by ARMT. The reliability of the analytic results is defined in Equation (4), which stands for the probability that the absolute difference between the real measured value (Weyes,i) and the estimated value from the image identification technology (Wcomputer,i) is less than 50 cm. If the absolute difference is less than 50 cm, the image identification water level/water depth estimate is considered reliable; if not, the estimation error is too large and the estimate is deemed unreliable. N denotes the number of images of a monitoring station. Meanwhile, the root mean square error (RMSE) and mean absolute error (MAE) were applied to measure the performance of the algorithm, as shown in Equations (5) and (6).

$$\text{Error}_i = \begin{cases} 1, & \left| W_{\text{eyes},i} - W_{\text{computer},i} \right| < 50 \text{ cm} \\ 0, & \left| W_{\text{eyes},i} - W_{\text{computer},i} \right| \geq 50 \text{ cm} \end{cases}, \quad i = 1, 2, \cdots, N \tag{3}$$

$$\text{Reliability} = \frac{1}{N} \sum_{i=1}^{N} \text{Error}_i \tag{4}$$

$$\text{RMSE} = \sqrt{\frac{\sum_{i=1}^{n} \left( W_{\text{eyes},i} - W_{\text{computer},i} \right)^2}{n}} \tag{5}$$

$$\text{MAE} = \frac{\sum_{i=1}^{n} \left| W_{\text{eyes},i} - W_{\text{computer},i} \right|}{n} \tag{6}$$

According to the above analysis method, we evaluated the estimation error of the image identification on historical images from four different surveillance camera stations (Baoqiao, Xinan Bridge, Shimen Reservoir, and Chiayi County Donggang Activity Center), all of which showed fluctuations in water level. The evaluated reliability of each surveillance station is 0.83, 0.87, 0.90, and 0.92, respectively, and the results are shown in Table 1. From the results, it can be concluded that the reliability of the image identification estimation is about 88%. The causes of estimation error include water droplet interference in the identification region and abnormality in the identification image (the results include the estimation error correction).
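Equations (3)–(6) translate directly into a few array operations; the sketch below (not the authors' code) returns all three performance figures at once:

```python
import numpy as np

def reliability_rmse_mae(w_eyes, w_computer, tol_cm: float = 50.0):
    """Reliability (Eqs. (3)-(4)), RMSE (Eq. (5)), and MAE (Eq. (6)) between
    eye-read and image-identified water levels, both given in centimeters."""
    diff = np.asarray(w_eyes, float) - np.asarray(w_computer, float)
    hits = (np.abs(diff) < tol_cm).astype(float)   # Error_i indicator, Eq. (3)
    reliability = hits.mean()                      # Eq. (4)
    rmse = float(np.sqrt(np.mean(diff ** 2)))      # Eq. (5)
    mae = float(np.mean(np.abs(diff)))             # Eq. (6)
    return reliability, rmse, mae
```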

Table 1. Reliability analysis of image identification of historical data.

Station | Images | Resolution | Image Scale (cm/pixel) | |Error| < 50 cm (Number) | Reliability
Baoqiao | 200 | 352 × 288 | 8.55 | 166 | 0.83
Xinan Bridge | 200 | 352 × 240 | 3.76 | 174 | 0.87
Shimen Reservoir | 180 | 352 × 240 | 0.60 | 162 | 0.90
Chiayi County Donggang | 200 | 352 × 240 | 2.32 | 184 | 0.92

3. Experimental Results and Discussion

In this study, not only the historical image data from four monitoring stations but also real event scenario data from six monitoring stations during a typhoon in Taiwan (typhoon Megi) were tested for water level identification and reliability. The reliability examination was performed by comparing the image identification result with the actual water level. In the historical event analysis, 180 to 200 images were collected from each station, in sizes of 352 × 288 and 352 × 240 pixels, with image scales ranging from 0.60 to 8.55 cm/pixel. Under the condition that the identification estimation error was less than 50 cm, the average calculated reliability was 88% (Table 1). In the real application of this technology to all-weather flood monitoring and analysis during a typhoon in Taiwan, data from six monitoring stations—Nangang Bridge, Xinan Bridge, Jiquan Bridge, Baoqiao Bridge, Shanggueishan Bridge, and Lansheng Bridge—were collected at a frequency of one image per 5 min; the image size was 704 × 480 pixels, and the image scales ranged from 2.86 to 14.44 cm/pixel. Under the condition that the identification estimation error was less than 50 cm, the reliability of the water level estimation with respect to flood monitoring was between 79% and 93% (Table 2). The hit, miss, and false alarm statistics from the six stations are listed in Table 3. The hit rates estimated from the six stations were higher than 92%. The maximum false negative rate (miss) and false positive rate (false) were 6% and 3%, respectively.

Table 2. Reliability analysis of image identification of typhoon Megi.

Station | Images | Resolution | Image Scale (cm/pixel) | |Error| < 50 cm (Number) | Reliability | RMSE (cm) | MAE (cm)
Nangang Bridge | 198 | 704 × 480 | 14.44 | 185 | 0.93 | 95.04 | 30.25
Xinan Bridge | 257 | 704 × 480 | 9.74 | 213 | 0.83 | 104.51 | 49.52
Jiquan Bridge | 285 | 704 × 480 | 2.86 | 264 | 0.93 | 58.96 | 22.85
Baoqiao | 340 | 704 × 480 | 3.45 | 316 | 0.93 | 21.77 | 10.79
Shanggueishan Bridge | 338 | 704 × 480 | 10.06 | 267 | 0.79 | 42.09 | 40.30
Lansheng Bridge | 156 | 704 × 480 | 7.21 | 141 | 0.90 | 41.41 | 21.86

Table 3. The hit, miss, and false alarm statistical measures from six stations sending notifications via e-mail. Hit means the assessment fit the actual situation. Miss means a false negative situation. False means a false positive situation.

Location | Hit (Rate) | Miss (Rate) | False (Rate) | Total
Nangang Bridge | 196 (0.990) | 0 (0.000) | 2 (0.010) | 198
Xinan Bridge | 256 (0.996) | 0 (0.000) | 1 (0.004) | 257
Jiquan Bridge | 282 (0.990) | 0 (0.000) | 3 (0.010) | 285
Baoqiao | 319 (0.938) | 11 (0.032) | 10 (0.030) | 340
Shanggueishan Bridge | 335 (0.991) | 0 (0.000) | 3 (0.009) | 338
Lansheng Bridge | 143 (0.917) | 9 (0.057) | 4 (0.026) | 156

During typhoon Megi in September 2016, this technology was used to monitor fluctuations of the river water level, and a warning was issued twice at the Lansheng Bridge monitoring station. The water level around the station even reached full water level, which caused the local police and firefighters to block roads urgently and ban people from entering to avoid danger. In this event, monitoring image identification technology was applied to obtain real-time water level information and fluctuation data. In the course of monitoring water level changes (Figure 9), the identification error points were mainly caused by the interference of water droplets on camera lenses, resulting in identification errors (Figure 10).



FigureFigure 9. Real-time 9. Real-time image image water water level level monitoring monitoring and and analysis. analysis. (a (–af–)f) The The monitoring monitoring coursecourse ofof typhoon typhoon Megi. The white box represents the region for analysis. The white line in the box Megi. The white box represents the region for analysis. The white line in the box represents the represents the estimated water level. In the (f) panel, it is obvious that the flood water level estimatedexceeds water the analyzing level. In region; the (f) panel,hence, the it is wate obviousr level that could the not flood be correctly water level identified. exceeds (g the) The analyzing region;progress hence, of thethe water level level analysis could notof a bemonitoring correctly cycle. identified. Real line (g )was The shown progress for the of level the water of level analysisflood of alert a monitoring 2. The RMSE cycle. (root Real me linean square was shown error) forand the MAE level (mean of flood absolute alert error) 2. The were RMSE 41.41 (root mean squareand error) 21.86 andcm, respectively. MAE (mean The absolute water error)level is were define 41.41d in reference and 21.86 to cm, sea respectively.level (i.e., height The above water level is definedsea level in reference or elevation). to sea level (i.e., height above sea level or elevation).



Figure 10. Images with false diagnoses. (a–c) The lens image was obstructed by water droplets. (d) The water level was out of the identification area.

The monitoring image identification technology enables us to gain real-time water level information and fluctuation data. Although limited by the effects of the lens condition and the lighting environment, it can be applied to early automatic warning. Once the analytic result reaches the warning threshold, the system automatically sends an advance warning notice to the disaster prevention and response unit so that protective measures can be taken as a means of early prevention in the event of a flood, including the evacuation of people, road closures, and flood control.

4. Conclusions

In this study, ground weather radar is employed to identify the rainfall hotspot, which is conducted through automated determination of the cloud cover position and interpretation of the centroid of the cloud cover in the image to define the probable region of precipitation. In addition, by adding the existing CCTV footage to carry out real-time flood water level monitoring, the developed technology achieves an all-weather automated surveillance objective. Finally, the quantified analytic results from the image identification—informative images and quantified data—are immediately transmitted to relevant personnel via information and communications technology.

At present, the early warning of floods mainly depends on on-site measurement or related sensor devices to obtain information on the water level, which lacks images and often causes uncertainty in the results. Therefore, this research applied surveillance camera images to the automated monitoring and identification of flood events and the further obtainment of flood water level and image data. This method can not only reduce the risk to personnel during on-site surveying but also measure the water level, as a sensor would.

During the early stage of flood control and the management of river water levels, hydrological engineering technology, such as the construction of embankments, was mainly and widely adopted. However, due to the extreme climate change of the past decade, these engineering constructions can no longer catch up with the speed at which disasters occur. As a result, besides hydrological engineering, new concepts like disaster mitigation and risk avoidance have been added to river management and control in recent years [22,23], such as opening reservoir gates in advance to discharge water, closing floodgates, evacuating people in low-lying areas in advance, and so forth. The existing early warning systems mainly depend on the water level information provided by monitoring stations through remote sensing images and numerical prediction systems, and they can provide large-scale flood warnings several hours, even 24 h, in advance [24,25]. Nevertheless, they show low specificity and non-real-time information, and a large-scale flood warning is not suitable for a small, localized region to mitigate disaster because it is difficult to cross-verify the predictions against the actual flood conditions. Decision makers cannot be provided with sufficient information for disaster management, nor can they plan disaster mitigation measures quickly. Therefore, ARMT aims to provide an automated flood monitoring system which exploits existing CCTV cameras to provide current river images and image data of water level identification. Through this system, decision makers can get to know the on-site situation more quickly in order to evacuate people before a flood disaster occurs, or they can devise appropriate disaster mitigation measures after the disaster to reduce the impact of floods on urban areas.

Author Contributions: S.-C.C. and T.-B.C. supervised the research and contributed to the manuscript organization. J.-H.W. contributed to the in situ equipment, monitoring data acquisition, and visual sensor network. W.-C.D. gave partial revision suggestions and comments on this manuscript. S.-Y.H. conceived and developed the research framework, performed the experiments, analyzed the data, and wrote the manuscript.

Funding: The authors thank the Water Resources Agency, Taiwan, R.O.C., for partially supporting this study financially under Contract No. MOEAWRA1060127.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Guhathakurta, P.; Sreejith, O.P.; Menon, P.A. Impact of Climate Change on Extreme Rainfall Events and Flood Risk in India. J. Earth Syst. Sci. 2011, 120, 359. [CrossRef]
2. Wade, S.D.; Rance, J.; Reynard, N. The UK Climate Change Risk Assessment 2012: Assessing the Impacts on Water Resources to Inform Policy Makers. Water Resour. Manag. 2013, 27, 1085–1109. [CrossRef]
3. Ratnapradipa, D. Guest Commentary: 2012 Neha/Ul Sabbatical Report Vulnerability to Potential Impacts of Climate Change: Adaptation and Risk Communication Strategies for Environmental Health Practitioners in the United Kingdom. J. Environ. Health 2014, 76, 28–33. [PubMed]
4. Tian, X.; ten Veldhuis, M.C.; See, L.; van de Giesen, N.; Wang, L.P.; Dagnachew Seyoum, S.; Lubbad, I.; Verbeiren, B. Crowd-Sourced Data: How Valuable and Reliable Are They for Real-Time Urban Flood Monitoring and Forecasting? EGU Gen. Assem. Conf. Abstr. 2018, 20, 361.
5. Dai, Q.; Rico-Ramirez, M.A.; Han, D.; Islam, T.; Liguori, S. Probabilistic Radar Rainfall Nowcasts Using Empirical and Theoretical Uncertainty Models. Hydrol. Process. 2015, 29, 66–79. [CrossRef]
6. Cristiano, E.; Veldhuis, M.C.T.; Giesen, N.V.D. Spatial and Temporal Variability of Rainfall and Their Effects on Hydrological Response in Urban Areas—A Review. Hydrol. Earth Syst. Sci. 2017, 21, 3859–3878. [CrossRef]
7. Berne, A.; Krajewski, W.F. Radar for Hydrology: Unfulfilled Promise or Unrecognized Potential? Adv. Water Resour. 2013, 51, 357–366. [CrossRef]
8. Otto, T.; Russchenberg, H.W. Estimation of Specific Differential Phase and Differential Backscatter Phase from Polarimetric Weather Radar Measurements of Rain. IEEE Geosci. Remote Sens. Lett. 2011, 8, 988–992. [CrossRef]
9. Kim, Y.J.; Park, H.S.; Lee, C.J.; Kim, D.; Seo, M. Development of a Cloud-Based Image Water Level Gauge. IT Converg. Pract. (INPRA) 2014, 2, 22–29.
10. Che-Hao, C.; Ming-Ko, C.; Song-Yue, Y.; Hsu, C.T.; Wu, S.J. A Case Study for the Application of an Operational Two-Dimensional Real-Time Flooding Forecasting System and Smart Water Level Gauges on Roads in Tainan City, Taiwan. Water 2018, 10, 574. [CrossRef]

11. Narayanan, R.; Lekshmy, V.M.; Rao, S.; Sasidhar, K. A Novel Approach to Urban Flood Monitoring Using Computer Vision. In Proceedings of the 2014 IEEE International Conference on Computing, Communication and Networking Technologies (ICCCNT), Hefei, China, 11–13 July 2014; pp. 1–7.
12. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [CrossRef] [PubMed]
13. Hoeher, P.; Kaiser, S.; Robertson, P. Two-Dimensional Pilot-Symbol-Aided Channel Estimation by Wiener Filtering. In Proceedings of the 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing, Munich, Germany, 21–24 April 1997; p. 1845.
14. Rokni, K.; Ahmad, A.; Solaimani, K.; Hazini, S. A New Approach for Surface Water Change Detection: Integration of Pixel Level Image Fusion and Image Classification Techniques. Int. J. Appl. Earth Obs. Geoinf. 2015, 34, 226–234. [CrossRef]
15. Canny, J. A Computational Approach to Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 6, 679–698. [CrossRef]
16. Kim, J.Y.; Kim, L.S.; Hwang, S.H. An Advanced Contrast Enhancement Using Partially Overlapped Sub-Block Histogram Equalization. IEEE Trans. Circuits Syst. Video Technol. 2001, 11, 475–484.
17. Abdullah-Al-Wadud, M.; Kabir, M.H.; Dewan, M.A.A.; Chae, O. A Dynamic Histogram Equalization for Image Contrast Enhancement. IEEE Trans. Consum. Electron. 2007, 53, 593–600. [CrossRef]
18. Hughes, D.; Greenwood, P.; Coulson, G.; Blair, G. Gridstix: Supporting Flood Prediction Using Embedded Hardware and Next Generation Grid Middleware. In Proceedings of the IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks, WoWMoM 2006, Buffalo-Niagara Falls, NY, USA, 26–29 June 2006; p. 6.
19. Pappenberger, F.; Beven, K.J.; Hunter, N.M.; Bates, P.D.; Gouweleeuw, B.T.; Thielen, J.; De Roo, A.P.J. Cascading Model Uncertainty from Medium Range Weather Forecasts (10 Days) through a Rainfall-Runoff Model to Flood Inundation Predictions within the European Flood Forecasting System (EFFS). Hydrol. Earth Syst. Sci. Discuss. 2005, 9, 381–393. [CrossRef]
20. Sonka, M.; Hlavac, V.; Boyle, R. Image Processing, Analysis, and Machine Vision; Cengage Learning: Boston, MA, USA, 2014.
21. Gore, J.A.; Banning, J. Discharge Measurements and Streamflow Analysis. In Methods in Stream Ecology, 3rd ed.; Elsevier: Amsterdam, The Netherlands, 2017; Volume 1, pp. 49–70.
22. Ziegler, A.D. Water Management: Reduce Urban Flood Vulnerability. Nature 2012, 481, 145. [CrossRef] [PubMed]
23. Ziegler, A.D.; She, L.H.; Tantasarin, C.; Jachowski, N.R.; Wasson, R. Floods, False Hope, and the Future. Hydrol. Process. 2012, 26, 1748–1750. [CrossRef]
24. Lee, C.S.; Huang, L.R.; Chen, D.C. The Modification of the Typhoon Rainfall Climatology Model in Taiwan. Nat. Hazards Earth Syst. Sci. 2013, 13, 65–74. [CrossRef]
25. Lee, C.S.; Ho, H.Y.; Lee, K.T.; Wang, Y.C.; Guo, W.D.; Chen, D.Y.C.; Hsiao, L.F.; Chen, C.H.; Chiang, C.C.; Yang, M.J. Assessment of Sewer Flooding Model Based on Ensemble Quantitative Precipitation Forecast. J. Hydrol. 2013, 506, 101–113. [CrossRef]

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).