
sustainability

Article

An Early Fire Detection System Based on Image Block Threshold Selection Using Knowledge of Local and Global Feature Analysis

Ting Wei Hsu 1, Shreya Pare 2 , Mahendra Singh Meena 2, Deepak Kumar Jain 3, Dong Lin Li 4, Amit Saxena 5, Mukesh Prasad 2,* and Chin Teng Lin 2

1 Department of Electrical Engineering, National Chiao Tung University, Hsinchu 30010, Taiwan; [email protected] 2 School of Computer Science, FEIT, University of Technology Sydney, Ultimo 2007, Sydney, Australia; [email protected] (S.P.); [email protected] (M.S.M.); [email protected] (C.T.L.) 3 Institute of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; [email protected] 4 Department of Electrical Engineering, National Taiwan Ocean University, Keelung 202301, Taiwan; [email protected] 5 Department of Computer Science and Information Technology, Guru Ghasidas University, Bilaspur, Chhattisgarh 495009, India; amitsaxena65@rediffmail.com * Correspondence: [email protected]

Received: 4 August 2020; Accepted: 22 October 2020; Published: 27 October 2020

Abstract: Fire is one of the mutable hazards that damage property and destroy forests. Many researchers are working on early warning systems, which considerably minimize the consequences of fire damage. However, many existing image-based fire detection systems perform well only in a particular setting. A general framework that works under realistic conditions is proposed in this paper. This approach filters out image blocks based on thresholds of different temporal and spatial features, starting with dividing the image into blocks and extracting flame blocks from the image foreground and background; the candidate blocks are then analyzed to identify the local features of flame color, source immobility, and flame flickering. Each local feature filter resolves different false-positive fire cases. Filtered blocks are further examined by a global analysis to extract flame texture and flame reflection in surrounding blocks. Sequences of successful detections are buffered by an alarm decision system to reduce errors due to external camera influences. The proposed algorithms have low computation time. Through a sequence of experiments, the results are consistent with the empirical evidence and show that the detection rate of the proposed system exceeds that of previous studies while the false alarm rate is reduced under various environments.

Keywords: feature extraction; video surveillance; image processing; fire detection; block-based analysis

1. Introduction

Fire is one of the most uncontrollable phenomena with respect to time and space and directly endangers human life, property, and nature. Based on the National Fire Protection Association (NFPA) report, 1,342,000 fire incidents were reported, and there was 10.6 billion U.S. dollars in property damage in the U.S. in 2016 [1]. There are three fast ways of detecting fire: (1) heat detection (e.g., fixed temperature detectors), (2) chemical-compound detection (e.g., ionization and gas sensors), and (3) flame detection (e.g., infrared and ultraviolet sensors) [2]. Traditional methods

have many disadvantages. One of these disadvantages is transmission delay and poor detection in outdoor places, for instance, forests and long tunnels. Furthermore, sensors have massive deployment and maintenance costs. Early fire detection algorithms process visual information obtained from static cameras, which are incorporated with the techniques of CCTV surveillance systems. Fire detection consists of two main categories: flame and smoke features. Most of the time, smoke becomes visible before flames in forest fires and helps in detecting fires at an early stage, which in turn can reduce the dangerous situations caused by them. However, smoke detection cannot work at night without appropriate illumination. To detect smoke and flame, different image processing techniques can be used. During image processing, a video sequence is given as input to the system, and the output can be an image, characteristics of an image, or pre-decided parameters. It consists in the processing of the 2D image. Various tasks can be performed using image processing, such as feature extraction, classification, and recognition of different patterns. The process of fire development is divided into four periods [2–4]: inception, fire growth period, fully developed period, and decay period, as shown in Figure 1. According to the figure, detection of fire during the growth period could reduce property damage; recently, many studies have been introduced to solve early fire detection problems using surveillance cameras, which use machine vision that can detect flame based on image processing and different machine learning algorithms.

Figure 1. Fire development periods.

Flame detection classification can be categorized mainly into four different categories: color, texture, motion, and shape. The first type of classification is based on the color of the flame. Frequently, the flame will exhibit yellow, yellow-red, and red colors. Depending on the color distribution of fire, a fire model can be defined in different color spaces. Chen et al. [5] and Celik et al. [6] concluded that fire must abide by the RGB color space distribution law, R ≥ G, G ≥ B, and R ≥ R_T; furthermore, Horng et al. [7] suggested a unique color model in the HSI (Hue, Saturation, Intensity) color space depending on the saturation. This model can be applied to environments with different brilliance; therefore, different environments can utilize the fire model more widely. The authors of [8] combined the RGB model and the HSI model to identify the color distribution of fire. The major drawback of the RGB model is that it does not consider the brightness difference in the fire due to the use of a camera. Celik et al. [6] proposed a flame color model based on the YCbCr color space (Y is the luma component, and Cb and Cr are the blue-difference and red-difference chroma components) to efficiently identify the fire chroma and luminance. In this research, the flame color model also follows the Celik method, with further improvements. Walia et al. [9] introduced a new approach using data fusion applied to three color models. The Dezert–Smarandache theory (DSmT) of evidence fusion deals with disagreement in data obtained from the color models RGB, HSV (Hue, Saturation, and Value), and YCbCr. Their conflicting mass redistribution is resolved using proportional conflict redistribution (PCR); PCR5 uses the logic of conjunctive rules. Wang et al. [10] introduced a K-means clustering of the two models HSV and YCbCr. The second category of flame detection is texture classification. A burning flame appears similar from a long distance; however, upon a closer look, visible subtle changes appear in nearby blazing pixels. A preset mask was introduced by Ebert et al.
[11] to calculate and analyze the tendency of fire to be grouped around a central point, the spatial color variation of fire, and the temporal variation of fire intensity. Toreyin et al. [12] proposed a flame region color analysis using wavelet analysis. They used the ample variation of pixel brightness in the neighborhood of the fire zone. Absolute values of low and high bands were calculated using 2D wavelet analysis to define the parameters, and only objects with significant parameters were considered fire bodies. As suggested by Zhao et al. [13], the texture parameter of the region of interest (ROI) can be calculated using the entropy formula. Zhao introduced two dynamic texture models [13]: the Multi-resolution Analysis Linear Dynamic System (MRALDS) and the Multi-resolution Analysis Non-Linear Dynamic System (MRANLDS). A dynamic texture segmentation system was proposed by Paygude and Vyas [14] based on local and global spatiotemporal techniques. A dynamic texture representation was proposed by Nguyen et al. [15] based on hierarchical local patterns. The third category is classification based on flame flickering. The prominent part of the fire is the flame, which keeps blazing and swaying on its own or propagates owing to interaction with the wind. This kind of swing may suddenly change the appearance of a flame pixel to red, and the red channel of the pixel then returns to the background color. Gunay et al. [16] proposed time-domain analysis and accumulation of the red channel of detected fire-colored moving pixels. Dimitropoulos et al. [17] proposed fire-flame detection based on a computer vision approach. First, background subtraction is used, followed by a non-parametric model based on color analysis defining candidate fire regions in a frame. Various spatiotemporal features are then utilized to model fire behavior [18], for example, flickering, color probability, and spatial energy. A linear dynamic system and several system approaches are utilized in each candidate region. The spatiotemporal consistency energy of each candidate fire region is calculated with prior knowledge of the possible existence of fire in neighboring blocks from the current and previous frames, and finally, for classification of candidate regions, a two-class Support Vector Machine (SVM) [19] classifier is used. Dai Duong et al. [20] proposed a fire detection system based on fuzzy logic. Flame shape analysis is considered the fourth type of fire classification. The fire surface area usually fluctuates compared to the relatively smooth features of other objects. Borges [21] proposed a system to determine the exterior of the fire using the objects' circumference/length ratio and actual circumference length. Zhang et al. [22,23] proposed a system to detect boundaries of objects using the Laplacian operator and used Fourier analysis to detect the fire contour. Ko et al. [24] proposed a novel fire-flame detection method based on visual features using fuzzy Finite Automata (FFA) and a probability density function. The method offers a systematic approach to handle irregularities in the computation systems and is capable of handling continuous spaces by combining the methodologies of automata and fuzzy logic. Gaur et al. [25] described recent developments in the area of fire detection using video sequences. Higher detection rates are targeted by most of the published literature. Speed of fire detection and decreasing false alarm rates are the most preferred parameters while calculating the accuracy of fire detection systems [26].
Most of the methods do not consider increasing the fire detection rate, though some systems concentrate on detecting fire in a single frame or an entire video. Using block processing, the proposed research framework can detect and locate the fire region accurately even in the worst case, when fire and non-fire objects are in the same frame. In this paper, an efficient and robust fire detection system is proposed using local and global feature analysis for each candidate block. The proposed system is so computationally light that it can even be implemented on an embedded system for any real-time surveillance problem. A novel fire detection system based on the Convolutional Neural Network (CNN) model YOLO v3 (the latest variant of the popular object detection algorithm YOLO—You Only Look Once) [27],

R–FCN (Region-based Fully Convolutional Network), Faster-RCNN (Region-Based Convolutional Neural Networks) and SSD (single-shot detector) was proposed by Li and Zhao [28]. YOLO v3 performed well and had 83.7% accuracy at a speed of 28 FPS (frames per second). R–FCN had 83.3% accuracy at 5 FPS, Faster-RCNN had 84.9% accuracy at 3 FPS, and SSD had 82.8% accuracy at 16 FPS. The dataset used for experimentation had 29,180 images from the public fire image database. Images were pre-processed by resizing them to 500 × 375 and 375 × 500 for horizontal and longitudinal images, respectively. The dataset had two object classes named "fire" and "smoke" and two further classes named "fire-like" and "smoke-like". Mao et al. [29] proposed a multichannel convolution-neural-network-based fire detection method. Three-channel colorful images were given as input to the CNN. It uses a hidden layer with multiple-layer pooling and convolution along with tuning of the model parameters using backpropagation. Classification of fire recognition is conducted using the softmax method. The authors used a GPU (graphics processing unit) for training and testing the model. For experimentation, 7000 and 4494 images were considered for training and testing, respectively. The method was compared with the existing support vector machine and deep neural network algorithms. The authors reported a classification accuracy of 98% and outperformed existing methods in terms of precision rate, recall rate, ROC curve, and F1-score. Muhammad et al. [30] proposed an early fire detection framework based on a fine-tuned CNN using CCTV surveillance cameras. The authors proposed an adaptive prioritization mechanism for autonomous operations and a dynamic channel selection algorithm for reliable data dissemination. For experimentation, a dataset of 68,457 images and videos was used, which was a collection of 62,690 frames from Foggia's video dataset, 226 images from Chino's dataset, and images from a few other datasets. Experimental results verified a high accuracy of 94.39% with a false positive rate of 9.07%. A classical CNN has high accuracy on a single frame but fails to capture motion information between frames. To solve this issue, Hu and Lu [31] proposed a spatial-temporal-based CNN algorithm for real-time video smoke detection. The algorithm uses a multitask learning strategy to recognize smoke and estimate optical flow by capturing inter-frame motion and intra-frame appearance features simultaneously. To prove the efficiency and effectiveness of the method, a self-created dataset is used with 157 videos comprising 86 non-smoke videos and 71 smoke videos. The system has a 3.5% false alarm rate, a 97% detection rate, and a processing time of 5 ms per frame. Govil et al. [32] proposed a smoke detection system based on machine-learning-based image recognition software and a cloud-based workflow that is capable of scanning hundreds of cameras every minute. The authors further plan to leverage other existing fire detection algorithms and combine their information with the information obtained from the authors' camera-based detection system to provide a robust, faster, combined system. The above are a few examples of CNN-based fire detection methods.

2. System Overview

The primary functionality of the framework is the early detection of fire with a blazing flame, not the detection of smoke or a smoldering fire. The proposed system is shown in Figure 2 and consists of three basic phases: pre-processing, flame feature processing, and global feature processing. In pre-processing, the input image is divided into blocks at fixed positions. Each block is used as an observation unit, and the analysis for identifying fire flames is applied to these blocks. In this phase, the moving foreground is found and the background is eliminated, followed by connecting the candidate blocks to construct the moving shape. The local block feature processing phase consists of color analysis, fire source analysis, and disorder analysis. The candidate blocks that pass the criteria of these steps are processed by a global feature analysis step, which includes analysis of color variance and fire-surrounded areas. Image block candidates that pass all steps feed an alarm decision unit, which triggers an alarm. Figure 3 is the flowchart of block processing. The input is the gray-level image sequence, and the output is candidate blocks with a moving property. There are two common methods for obtaining a foreground image: one is the temporal difference, and the other is background subtraction.


Figure 2. Flame detection system overview.

Figure 3. Block processing.

2.1. Background Subtraction

The first step of the system is removing the background from the image; this is implemented using a Gaussian mixture model (GMM) [13] to construct the background image. GMM is commonly used for background modeling, as it is one of the most robust methods. However, surveillance videos observe background movement, for example, waving leaves and sparkling lights, which leads to pixel changes at specific intervals. Therefore, the mixture model uses multiple Gaussian distributions. The flow chart of the GMM background construction is shown in Figure 4.


Figure 4. Gaussian mixture model (GMM) background model construction.

Equation (1) shows the probability distribution of a pixel having a specific value X_t at time t.

$$P(X_t) = \sum_{k=1}^{K} \omega_{k,t}\, \eta\big(X_t, \mu_{k,t}, \Sigma_{k,t}\big) \qquad (1)$$

where K is the number of distributions, μ_{k,t} is the mean, ω_{k,t} represents the weight, Σ_{k,t} is the covariance matrix of the kth Gaussian in the mixture at time t, and η is the Gaussian probability density function shown in Equation (2).

$$\eta(X_t, \mu_t, \Sigma_t) = \frac{1}{(2\pi)^{n/2}\,|\Sigma_t|^{1/2}} \exp\!\Big(-\frac{1}{2}(X_t - \mu_t)^T \Sigma_t^{-1} (X_t - \mu_t)\Big) \qquad (2)$$

where n is the dimension of the data. To simplify the computation, it is assumed that the data of each channel are independent and have the same variance. Therefore, Equation (3) shows the covariance matrix.

$$\Sigma_{k,t} = \sigma_k^2 I \qquad (3)$$

Equations (4) and (5) update the weight, mean, and variance as shown.

$$\omega_{k,t+1} = (1 - \alpha)\,\omega_{k,t} + \alpha\, M_{k,t+1}, \qquad \mu_{t+1} = (1 - \rho)\,\mu_t + \rho\, X_{t+1} \qquad (4)$$

$$\sigma_{t+1}^2 = (1 - \rho)\,\sigma_t^2 + \rho\,(X_{t+1} - \mu_{t+1})^T (X_{t+1} - \mu_{t+1}), \qquad \rho = \alpha\, \eta\big(X_{t+1} \mid \mu_{k,t}, \sigma_{k,t}\big) \qquad (5)$$

where M_{k,t+1} is 1 for the matched model and 0 for the remaining, unmatched models, α is a learning rate, and the second learning rate ρ is defined in Equation (5). The weights of the remaining Gaussians are updated accordingly. For an unmatched distribution, the mean, weight, and variance of the last distribution are replaced by X_{t+1}, a low weight value, and a high variance, respectively. The background image constructed using GMM is shown in Figure 4. Figure 5b demonstrates the foreground object obtained by background subtraction.
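As a concrete illustration of this background-modelling step, the sketch below uses OpenCV's MOG2 subtractor, which maintains a per-pixel mixture of Gaussians and updates it in the spirit of Equations (1)–(5). The file name and parameter values are illustrative assumptions, not the authors' settings.

```python
# Sketch: per-pixel Gaussian-mixture background modelling with OpenCV.
# The subtractor keeps several Gaussians per pixel and updates their
# weights/means/variances, broadly as in Equations (1)-(5).
import cv2

cap = cv2.VideoCapture("fire_scene.mp4")        # hypothetical input video
mog = cv2.createBackgroundSubtractorMOG2(history=500,
                                          varThreshold=16,
                                          detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # the third argument plays the role of the learning rate alpha in Eq. (4)
    foreground = mog.apply(frame, None, 0.01)
    background = mog.getBackgroundImage()        # reconstructed background image
    cv2.imshow("foreground mask", foreground)
    if cv2.waitKey(1) == 27:                     # press Esc to quit
        break
cap.release()
```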


Figure 5. Background subtraction as a result of pre-processing. (a) Original image (b) Foreground image.

2.2. Object Movement Detection

Identifying objects' movements from a video image sequence is implemented by dividing each image into non-overlapping blocks of the same size. The first step in the process is finding out which blocks have gray-level changes. The GMM approach extracts the foreground image, and Equation (6) is used to compute the summation of the foreground image for each block.

$$S_k = \begin{cases} 1, & \text{if } \sum_{x,y \in S_k} foreground(x, y, t) > T_1 \\ 0, & \text{otherwise} \end{cases} \qquad (6)$$

where S_k is the kth block and x, y are the coordinates of the scene. T_1 is the predefined threshold. Foreground regions found by the GMM approach may contain partially or fully static objects. To overcome this problem, the process calculates the temporal difference between two successive frames. In the analysis of dynamic images, if the difference image has a value of 1 for any pixel, it is considered a moving object in the frame. Video images usually have a reduced signal-to-noise ratio owing to quantization and intrinsic electronic noise; therefore, false segmentation is produced when the difference is taken between two successive frames. Noise reduction is achieved by summing over each block to find the moving property. The block difference is defined as

$$T_k = \begin{cases} 1, & \text{if } \sum_{x,y \in T_k} \big[f(x, y, t) - f(x, y, t - \Delta t)\big] > T_2 \\ 0, & \text{otherwise} \end{cases} \qquad (7)$$

where T_k is the kth block, x, y are the coordinates of the scene, f is the input image, and T_2 is the predefined threshold. If the temporal difference and the background subtraction value are both more substantial than the predefined thresholds, the computation cost of the process decreases. A block image is considered a candidate containing moving objects using Equation (8).

$$B_k = \begin{cases} 1, & \text{if } S_k \ \mathrm{AND}\ T_k = 1 \\ 0, & \text{otherwise} \end{cases} \qquad (8)$$

The information of a particular block over time is considered as a "block process" in the following sections. Figure 6 illustrates some results produced by block processing.
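A minimal sketch of Equations (6)–(8) is given below; the block size and the thresholds T1 and T2 are illustrative values only.

```python
# Sketch of Equations (6)-(8): a block is a moving candidate only if both the
# GMM foreground count (S_k) and the temporal-difference sum (T_k) exceed
# their thresholds.
import numpy as np

def moving_candidate_blocks(fg_mask, prev_gray, curr_gray,
                            block=16, T1=40, T2=600):
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    h, w = fg_mask.shape
    candidates = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            ys, xs = by * block, bx * block
            fg_sum = np.count_nonzero(fg_mask[ys:ys + block, xs:xs + block])  # Eq. (6)
            td_sum = diff[ys:ys + block, xs:xs + block].sum()                 # Eq. (7)
            candidates[by, bx] = (fg_sum > T1) and (td_sum > T2)              # Eq. (8)
    return candidates
```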


Figure 6. Detecting moving objects as a result of the pre-processing phase.

2.3. Flame Color Analysis

The fire color model in this research uses Celik et al.'s [6] model, which is based on the YCbCr color space. This model offers a reduced false alarm rate and an increased detection rate by adaptively adjusting to the surrounding glare. Equation (9) shows the linear transformation between the RGB and YCbCr color spaces:

$$\begin{bmatrix} Y \\ Cb \\ Cr \end{bmatrix} = \begin{bmatrix} 0.2568 & 0.5041 & 0.0979 \\ -0.1482 & -0.2910 & 0.4392 \\ 0.4392 & -0.3678 & -0.0714 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} + \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix} \qquad (9)$$

where Y ranges from 16 to 235, and Cb and Cr range from 16 to 240. For a frame, the average values of the Y, Cb, and Cr components of the YCbCr color space are calculated by Equation (10):

$$Y_{mean} = \frac{1}{K}\sum_{i=1}^{K} Y(x_i, y_i), \quad Cb_{mean} = \frac{1}{K}\sum_{i=1}^{K} Cb(x_i, y_i), \quad Cr_{mean} = \frac{1}{K}\sum_{i=1}^{K} Cr(x_i, y_i) \qquad (10)$$

where (x_i, y_i) represent the pixel coordinates, K represents the frame's total pixel count, Y_mean represents the frame's average luminance, Cb_mean represents the average blue-difference chroma component, and Cr_mean represents the average red-difference chroma component. The RGB-space fire rule R > R_mean and R > G > B is converted to a YCbCr rule using

$$Y(x, y) > Cb(x, y), \qquad Cr(x, y) > Cb(x, y) \qquad (11)$$

Therefore,

$$F(x, y) = \begin{cases} 1, & \text{if } Y(x, y) > Cb(x, y) \ \&\ Cr(x, y) > Cb(x, y) \\ 0, & \text{otherwise} \end{cases} \qquad (12)$$

Y(x, y) represents the luminosity, and F(x, y) represents the conditional function for the pixel's compliance with the rule in Equation (12). Cr(x, y) and Cb(x, y) represent the red-difference and blue-difference chroma components, respectively, at the coordinates (x, y).

This algorithm is ideal for general yellow and red fire colors. However, some fires with colors closer to white, high intensity, and high temperature cannot be detected correctly. Consequently, the color model is divided into two areas. The divisions are called high-intensity fire (Y > 220) and regular-intensity fire (Y < 220), respectively. For high-intensity fire, the conditions are adjusted as shown:

$$f\big(Y(x, y) > 220\big) \qquad (13)$$

$$F(x, y) = \begin{cases} 1, & \text{if } Y(x, y) > Y_{mean} \ \&\ Cr(x, y) > Cb(x, y) \\ 0, & \text{otherwise} \end{cases} \qquad (14)$$

The rule changes because, at high intensity, the Cb and Cr values are closer to their maximum values. It is notable from various observed fire images that fire pixels usually have greater luminosity than the average luminosity of the entire picture, and the red-difference chroma component is greater than its average over the entire picture. The luminosity must be greater than the blue difference, and the red difference must be more significant than the blue difference. The YCbCr distribution statistics of fire pictures are demonstrated in Figure 7.

Figure 7. Distribution of fire color in YCbCr color space.

From various fire pictures, some features can also be observed. A pixel with fire usually has higher luminosity than the average luminosity of the entire picture, and its red-difference chroma component is higher than the average of the entire picture. The blue-difference chroma component is smaller than the average of the entire picture. The following formula expresses the combination of the features mentioned above:

$$F(x, y) = \begin{cases} 1, & \text{if } Y(x, y) > Y_{mean} \ \&\ Cb(x, y) < Cb_{mean} \ \&\ Cr(x, y) > Cr_{mean} \\ 0, & \text{otherwise} \end{cases} \qquad (15)$$

A fire-colored pixel will also show a clear difference between the blue-difference and red-difference chroma components, as in the following:

$$F(x, y) = \begin{cases} 1, & \text{if } |Cb(x, y) - Cr(x, y)| \geq \tau \\ 0, & \text{otherwise} \end{cases} \qquad (16)$$

where τ represents a constant with a default range of 0 to 224. A post-modification comparison of the second equation is shown in Figure 8a,b.
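The sketch below combines the color rules of Equations (12), (15), and (16) with the 10% block-fill criterion described later in this section; the value of τ and the block size are illustrative assumptions.

```python
# Sketch of the YCbCr flame-colour rules (Eqs. 12, 15, 16) and the
# per-block 10% fill test. tau and the block size may need tuning.
import cv2
import numpy as np

def flame_color_mask(bgr, tau=40):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    Y, Cr, Cb = cv2.split(ycrcb)
    Y, Cr, Cb = Y.astype(np.int16), Cr.astype(np.int16), Cb.astype(np.int16)
    rule12 = (Y > Cb) & (Cr > Cb)                                   # Eq. (12)
    rule15 = (Y > Y.mean()) & (Cb < Cb.mean()) & (Cr > Cr.mean())   # Eq. (15)
    rule16 = np.abs(Cb - Cr) >= tau                                 # Eq. (16)
    return rule12 & rule15 & rule16

def block_has_fire_color(mask, block=16, fill_ratio=0.10):
    # A block matches the fire-colour feature when more than 10% of its
    # pixels satisfy the colour rules.
    h = mask.shape[0] // block * block
    w = mask.shape[1] // block * block
    blocks = mask[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3)) > fill_ratio
```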


Figure 8. Flame color modeling results. (a) Original image. (b) The resultant image based on Equations (12) and (13) under different intensity situations.

Results retrieved from the fire color model will usually include some objects that have colors similar to fire but are not fire, such as metal reflecting the fire in the picture. Thus, simply using color for analysis is not enough; fire color is only a loose feature analysis. The experimental result is shown in Figure 9.


Figure 9. Fire color analysis results. (a) Original image (b) Resultant image based on Equations (12) and (13) under different intensity situations.

In the experiment, the algorithm only identified the general red and yellow fire colors. Some high-temperature, high-intensity fires, close to white in color, cannot be accurately detected. Thus, we cut the color model into two areas. One is normal-intensity fire (Y < 220), for which we used the above algorithm. For high-intensity fires (Y > 220), we adjust the conditions as follows:

$$f\big(Y(x, y) > 220\big) \qquad (17)$$

$$F(x, y) = \begin{cases} 1, & \text{if } Y(x, y) > Y_{mean} \ \&\ Cr(x, y) > Cb(x, y) \\ 0, & \text{otherwise} \end{cases} \qquad (18)$$

The main reason for this adjustment is that, at high intensity, the Cr and Cb values will be close to the maximum value. There will not be a sufficient gap between the two values, so it is easy for a pixel not to comply with

$$Cb(x, y) < Cb_{mean} \qquad (19)$$

$$|Cb(x, y) - Cr(x, y)| \geq \tau \qquad (20)$$

The pre- and post-modification comparison of these two formulas is shown in Figure 10.

Figure 10. High-intensity formula modification comparison (a) Original image (b) fire detection using pre-modification formula (c) fire detection using post-modification formula.

It can be seen from Figure 10 that high-intensity fire can be detected by the modified algorithm. Conversely, false detection will increase. Since color is only the first filter, other algorithms will be proposed to filter out these errors. When the fire-colored pixels in a block exceed 10% of the block's total pixels, the block matches the fire color feature.

2.4. Flame Source Analysis

A fire will have a burning source. Although the flame can move with the wind, the fire source location will not usually move, or it will expand slowly with the fire. The burnt location will form a leftover area. Fire at the fire source has two features: a fixed position and high intensity in the fire source area.

$$B_{count}(i, j) = \begin{cases} 1, & \text{if } FirePixel(x, y) \in B_k(i, j) \\ 0, & \text{otherwise} \end{cases} \qquad (21)$$

The method calculates the high-intensity feature by comparing the average intensity of the entire image with that of the block. If the average block intensity is higher than the average intensity of the image, the block matches the high-intensity feature of the fire starting point. Then the time for which this candidate block holds the high-intensity state is calculated. This high-intensity state must persist for more than t frames to match the fixed-position feature, as mentioned in Equation (22):

$$B_{count}(i, j) = \begin{cases} Count + 1, & \text{if } B_k(Y_{mean}) > I(Y_{mean}) \\ 0, & \text{otherwise} \end{cases} \qquad (22)$$

Thus, the candidate block passes these features as shown in Equation (23):

$$B_{FS}(i, j) = \begin{cases} 1, & \text{if } B_{count}(i, j) > t \\ 0, & \text{otherwise} \end{cases} \qquad (23)$$

2.5. Flame Flickering Analysis

To filter out noise effects, grayscale image subtractions are performed t − 1 times, and the subtracted absolute value must be higher than the threshold value, as shown in Equation (24):

$$TD(x, y) = \begin{cases} 1, & \text{if } |gray(t) - gray(t - 1)| > TH_{gray} \\ 0, & \text{otherwise} \end{cases} \qquad (24)$$

The threshold TH_gray is a preset value obtained via empirical trials. Because the method uses the block as an observation unit, a block must contain a certain number of changed-intensity pixels to be considered as matching this feature, with a default value.

$$B_{TD}(i, j) = \begin{cases} 1, & \text{if } TD_{count}(x, y) > TH_{count},\ TD(x, y) \in B_{TD}(i, j) \\ 0, & \text{otherwise} \end{cases} \qquad (25)$$
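A sketch of the flickering test of Equations (24) and (25) is shown below; the thresholds TH_gray and TH_count and the block size are illustrative assumptions.

```python
# Sketch of Equations (24)-(25): mark pixels whose grey-level change between
# consecutive frames exceeds TH_gray, then flag blocks where the number of
# such pixels exceeds TH_count.
import numpy as np

def flicker_blocks(prev_gray, curr_gray, block=16, th_gray=30, th_count=20):
    td = np.abs(curr_gray.astype(np.int16) -
                prev_gray.astype(np.int16)) > th_gray          # Eq. (24)
    h = td.shape[0] // block * block
    w = td.shape[1] // block * block
    counts = td[:h, :w].reshape(h // block, block,
                                w // block, block).sum(axis=(1, 3))
    return counts > th_count                                    # Eq. (25)
```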

2.6. Flame Texture Analysis

Using the flame colour model for detection is not sufficient to filter out some objects that have a color similar to flames, for example, an orange car or yellow and red clothes. Fire is not a smooth texture pattern, as illustrated in Figure 11. Color variance can measure the texture. To differentiate the texture, Equation (27) introduces a weighted variance analysis model.

$$\mathrm{Var}(I) = \frac{\sum_{i=1}^{N} w\,(x_i - \mu)^2}{N} \qquad (26)$$

where μ is the mean value, and the Cr value of the observed pixel within the connecting block is represented by x_i. The Cr value will be closer to the maximum value because of the high intensity of the observed pixel with small variation within the block. For high-intensity situations, Y > 220, the term is increased by multiplying by a preset value w; otherwise, w is a constant equal to 1. Thus, the variance of the candidate block is appropriately calculated. The label of a specific connecting block is represented by l. Labels help in combining complete objects; for a set of blocks, if the label is the same, it can be concluded that they belong to the same object. Equation (27) represents the variance rule for different objects.

$$B_{var}(i, j) = \begin{cases} 1, & \text{if } \mathrm{Var}(l) > \sigma \ \&\ FirePixel \in B(i, j) \\ 0, & \text{otherwise} \end{cases} \qquad (27)$$

where the preset constant is represented by σ, which is the variance of the data collection that has the fire color model and a level of dispersion. Figure 12 shows that the fire is not smooth textured.
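The weighted variance of Equations (26) and (27) can be computed for one connected candidate region as in the sketch below; the weight applied to high-intensity pixels and the threshold σ are illustrative assumptions.

```python
# Sketch of Equations (26)-(27): weighted variance of the Cr channel over the
# pixels of a single connected candidate region.
import numpy as np

def weighted_cr_variance(cr_values, y_values, w_high=4.0, sigma=150.0):
    cr = np.asarray(cr_values, dtype=np.float64)
    y = np.asarray(y_values, dtype=np.float64)
    w = np.where(y > 220, w_high, 1.0)           # boost high-intensity pixels
    mu = cr.mean()
    var = np.sum(w * (cr - mu) ** 2) / cr.size   # Eq. (26)
    return var > sigma                           # Eq. (27): rough texture looks fire-like
```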


Figure 11. Flickering analysis using the temporal difference (a) Original image (b) Temporal difference result.


Figure 12. Flame texture variance results of two similar images.

2.7. Flame Area Analysis

Fire area analysis is a novel proposed method. It filters out a new type of frequent false alarm. These false alarms are related to objects that have characteristics similar to flames, for example, street lights and car headlights, as demonstrated in Figure 13.

Figure 13. False detection of flashing lights.

A flashing red light not only changes decorative features and matches the flame color, but also matches the fire's flickering feature and fire source. However, the difference between a flashing light and fire is their rate of area change. Flashing lights usually shine instantly and darken instantly, which produces rapid changes in the area of fire color detection. Flame area analysis is implemented in two main steps (extending the candidate region and wavelet analysis).

2.7.1. Extending Flame Region

A fire's area will shrink and expand with the wind and expand with the movement of the fire, but the rate of sudden change of the surface area stays within a particular range. Therefore, a new feature analysis scheme with extended observation is needed. Since the prior local block analysis has identified candidate blocks with a high probability of flame detection, these candidate blocks are used as a benchmark to find the topmost, bottommost, leftmost, and rightmost strong candidate blocks, and the observation area boundary is extended outward by one block. In Figure 14, the red blocks on the left side (Figure 14a) are the strong candidate blocks that match the four feature analyses, and the right side (Figure 14b) is the extended observation area.

Figure 14. (a) Images before extension and (b) images after extension.

2.7.2. 1-D Wavelet Analysis

Wavelet analysis is used for feature extraction to extract the high-frequency signal. The method converts the fire surface of the observation area into a 1D continuous signal. One-dimensional wavelet conversion analysis is short-period and considers changes in the past and future, which makes it suitable for transient signal analysis. The primary concern with a fire surface is the instantaneous rate of change of area. To determine the high frequency of the curve, a filter bank of the proposed method is fed by the input signal x[n], as shown in Figure 15, where the high-pass filter coefficients are represented by h[n] and the low-pass filter coefficients are represented by g[n].

Figure 15. Single wavelet transform.
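The region-extension step of Section 2.7.1 can be sketched as follows, assuming the strong candidate blocks are given on a block grid; the implementation details are illustrative, not the authors' code.

```python
# Sketch: take the bounding box of the strong candidate blocks and grow it by
# one block on every side, clipped to the block grid (Section 2.7.1).
import numpy as np

def extended_observation_area(candidate_grid):
    rows, cols = np.nonzero(candidate_grid)       # strong candidate blocks
    if rows.size == 0:
        return None
    top    = max(rows.min() - 1, 0)
    bottom = min(rows.max() + 1, candidate_grid.shape[0] - 1)
    left   = max(cols.min() - 1, 0)
    right  = min(cols.max() + 1, candidate_grid.shape[1] - 1)
    return top, bottom, left, right               # inclusive block bounds
```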

The values of the high- and low-pass filter coefficients are {1/2, 1/4, 1/2} and {−1/2, 1/4, −1/2}, respectively. Figure 16a,b show the trend of fire surface area changes and flashing light area changes, respectively. As shown in Figure 16, changes in flashing light areas are more dramatic compared to changes in the fire surface area. Equation (28) expresses the observation based on Figure 16.

Figure 16. Flame surface area ratio varies between the flame and non-flame object: (a) flame object. (b) non-flame object.

$$\beta = \frac{\sum_{n} x_H[n]}{n} \qquad (28)$$

This feature is used to accumulate the fire surface area changing rate over a period. n represents the amount of data, and β is a feature inversely proportional to the fire probability.
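The sketch below applies the single-level filter bank to the per-frame fire-area signal and accumulates β as in Equation (28); the alternating-sign kernel from the text is used here as the high-pass filter, and taking absolute values of the high-frequency output is an assumption.

```python
# Sketch of the 1-D filter-bank feature: convolve the fire-area signal with a
# high-pass kernel and accumulate the normalised high-frequency energy beta.
# Larger beta means more abrupt area changes (e.g. flashing lights), so beta
# is inversely related to the fire probability.
import numpy as np

def area_change_feature(area_signal):
    h = np.array([-0.5, 0.25, -0.5])          # alternating-sign kernel from the text
    x = np.asarray(area_signal, dtype=np.float64)
    x_high = np.convolve(x, h, mode="same")   # high-frequency component x_H[n]
    beta = np.abs(x_high).sum() / len(x)      # Eq. (28), absolute values assumed
    return beta
```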

2.8. Alarm Decision Unit

Image-based detection systems often receive wrong image information from the camera due to external influences, such as sudden shaking or changes in brightness caused by strong winds or fast object movement. The system adopts an alarm buffer to identify false alarms by eliminating these types of influences, as shown in Figure 17. All alarm candidates from the image sequence are collected by the buffer. The system computes the ratio of alarm data to non-alarm data for a specific period. When the detected intervals exceed 50% of the buffer size, the decision unit triggers an alarm. Meanwhile, it neglects isolated, uneven detected intervals, which are related to camera noise and sudden brightness changes.

Figure 17. Alarm decision unit.
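A minimal sketch of such an alarm buffer is given below; the buffer length is an illustrative choice.

```python
# Sketch of the alarm decision unit: keep the last N per-frame detection flags
# and raise an alarm only when more than half of the buffer contains detections.
from collections import deque

class AlarmBuffer:
    def __init__(self, size=30):
        self.buffer = deque(maxlen=size)

    def push(self, detected: bool):
        self.buffer.append(detected)

    def should_alarm(self) -> bool:
        if not self.buffer:
            return False
        return sum(self.buffer) / self.buffer.maxlen > 0.5
```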

3. Experimental Results

For experimentation, we used six non-fire videos and ten fire videos. Though the number of videos is small, their variety makes the dataset suitable for training and testing. The proposed system detects all ten fire videos with only one false alarm, whereas existing systems such as Chen et al. [5] and Celik et al. [6] detect only nine fire videos each, with three and four false alarms, respectively. The proposed approach successfully detected the videos that require global feature analysis, which the methods of Chen et al. and Celik et al. failed to identify. This shows that the flame area analysis is consistent with the experimental evidence. Although this approach has a high detection rate and a lower false alarm rate, it has a longer reaction time because it buffers information from previous detections. The proposed system is developed in C++ on the Windows 7 operating system, installed on a computer with 2 GB RAM and an Intel Core i5 2.53 GHz processor. The video database has a minimum resolution of 320 × 240.

3.1. Testing the Algorithm and Accuracy Discussions

Images of many different environmental conditions are applied to this approach, including outdoor, indoor, and varying sunlight, each containing fire events, motorcycles, pedestrians, waving objects, bicycles, leaves, tourist coaches, and trailers. These datasets include 13,257 positive samples (fire events) and 62,614 ordinary moving objects as negative samples. Equation (29) defines the detection rate and false alarm rate used to assess the effect of each component of the proposed method.

Detection Rate = (N_detected / N_fire) × 100%
False Alarm Rate = (N_false_detected / N_non-fire) × 100%    (29)

Table 1 shows the false alarm rates and detection rates of the various fire detection features without using the alarm decision unit (ADU). Fire color analysis alone yields a high detection rate, but its false-positive rate is also high. The fire source feature exploits the fixed location and higher intensity of a flame, so it works well against moving objects or lights that do not have enough intensity. The temporal difference feature captures the continuous movement of fire within a fixed block; it can eliminate light sources at fixed positions, such as street lights, but not moving ones. Using global features for verification, the variance analysis can separate fire from objects with smooth textures of flame-like colors, such as yellow and red cars. After matching the previous feature values, a block becomes a strong candidate block with a high likelihood of belonging to a fire body. Finally, the last judgment is based on the fire surface area analysis. As shown in the table, the features complement each other in filtering out false positives; using variance as a strong feature lowers most false-positive rates.
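Returning to Equation (29), both rates can be computed directly from sample counts; as a small sketch, the video-level numbers from Table 3 (an assumption of this illustration, used only to exercise the formulas) are plugged in below.

#include <cstdio>

// Equation (29): detection rate over fire samples and false alarm rate over
// non-fire samples, both expressed as percentages.
double detectionRate(int nDetected, int nFire)        { return 100.0 * nDetected / nFire; }
double falseAlarmRate(int nFalseDetected, int nNonFire) { return 100.0 * nFalseDetected / nNonFire; }

int main() {
    // Video-level counts for the proposed system (see Table 3): 10 of 10 fire
    // videos detected, 1 false alarm among the 6 non-fire videos.
    std::printf("Detection rate: %.1f%%\n", detectionRate(10, 10));   // 100.0%
    std::printf("False alarm rate: %.1f%%\n", falseAlarmRate(1, 6));  // 16.7%
    return 0;
}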

Table 1. Experimental results without ADU based on a single frame.

Analysis Method                 False Alarm Rate   Detection Rate   Reaction Time (sec)
Strong candidate                3.0%               91.2%            –
Variance analysis               5.3%               92.2%            –
Fire source analysis            18.7%              93.7%            –
Temporal difference analysis    24.4%              93.3%            –
Fire area analysis              2.8%               90.2%            2.95
Fire color analysis             35.6%              99.2%            –

Table 2 reports the false alarm rate, detection rate, and reaction time obtained by combining local and global flame analysis, and shows the improvement in the false alarm rate gained by adding the alarm decision unit (ADU). The proposed method reduces the false alarm rate from 2.8% to 0.8% at the cost of an increased reaction time.

Table 2. Experimental results with ADU based on a single frame.

Analysis Method                  False Alarm Rate   Detection Rate   Reaction Time (sec)
Local + Global analysis          2.8%               90.2%            2.95
Local + Global analysis + ADU    0.8%               89.8%            3.45

Table 3 exhibits the performance comparison of this research with two existing methods, by Chen et al. [5] and Celik et al. [6]. The proposed method detects all ten fire videos correctly, whereas the existing methods detect only nine videos each. The proposed method raised only one false alarm, compared with three and four false alarms for the methods of Chen et al. [5] and Celik et al. [6], respectively.

Table 3. Performance comparison results.

Methods               Detected Fire Videos   False Alarmed Videos   Reaction Time (sec)
Chen et al. [5]       9                      3                      2.27
Celik et al. [6]      9                      4                      1.69
Proposed System       10                     1                      3.45

3.2. Detection in Different Environments

The proposed method has been tested in indoor and outdoor environments and performed well in both. Figures 18–22 show the experiments for reflection on metal, indoor and outdoor environments, high-intensity fire, and an extremely dark area. Correct detections are marked with red rectangles, and false detections with green. As shown in Figure 18, the false detections addressed by the fire area analysis algorithm come from moving objects that emit light (e.g., cars). Figure 18 shows non-fire test videos, most of which are traffic or pedestrian videos. Since these videos generally do not exhibit fire characteristics, no strong candidate blocks are generated and no global candidate fire region is produced; therefore, only the foreground movement candidate blocks are shown to indicate that the videos contain foreground moving objects. The picture in the top right corner of Figure 18 is a special case: the entire frame has a reddish hue, so the color cue becomes useless, yet the other fire features still allow the system to accurately judge whether a fire exists. Figure 22b shows flashing red car lights. This event matches most of the fire features, including fire color, fire source, temporal difference, and variance analysis; the fire surface area feature was finally used to filter it out.

Figure 18. False detections of moving objects.


Figure 19. Outdoor environments: (a) with wind and (b) without wind.


Figure 20. Flame detection illustrated in various scenarios and different environments.

Figure 21. Fire reflection on metal on two different time frames.



Figure 22. Fire detection in (a) an extremely dark space and (b) flashing red light.

4. Conclusions and Future Work

The proposed system automatically determines the threshold mechanism and performs robust block selection. It is an ideal model for fire detection because it operates in various environments by using local and global feature analysis for each candidate block along with a decision unit. The proposed work includes three local feature models; each model calculates fire features and filters out a specific type of false alarm to provide a higher decision rate. The global features analyze the flame together with its surrounding blocks to distinguish fire objects from similar bodies with similar behavior, and false alarms caused by instantaneous changes are rectified by the decision unit. The proposed work can be improved further by eliminating false alarms caused by minor camera shaking in heavy wind and by handling the different flame colors produced by combustion of different fuels and gases. Moreover, smoke feature extraction is a viable approach to early detection of forest fires, where flames do not yet appear in the video images, and for specific scenarios such as smoldering fires it could be applied to detecting volcanic eruptions. One of the main advantages of the proposed system is its low computational complexity; thus, it can be implemented on embedded systems for real-time surveillance applications.

Author Contributions: Conceptualization, T.W.H., D.L.L., and M.P.; methodology, T.W.H., S.P., and M.P.; software, D.K.J. and M.S.M.; validation, M.S.M., A.S., D.K.J., and C.T.L.; formal analysis, T.W.H., S.P., D.L.L., A.S., and M.P.; investigation, M.S.M., A.S., D.L.L., and C.T.L.; resources, T.W.H., S.P., M.S.M., D.K.J., and M.P.; data curation, T.W.H., D.K.J., and A.S.; writing—original draft preparation, T.W.H. and S.P.; writing—review and editing, T.W.H., S.P., M.S.M., D.K.J., D.L.L., A.S., M.P., and C.T.L.; visualization, S.P., M.S.M., D.K.J., and A.S.; supervision, D.L.L. and C.T.L.; project administration, D.L.L. and M.P.; funding acquisition, D.L.L., M.P., and C.T.L. All authors have read and agreed to the published version of the manuscript.

Funding: This work was supported in part by the Australian Research Council (ARC) under Grant DP180100670 and Grant DP180100656, in part by the U.S. Army Research Laboratory under Agreement W911NF-10-2-0022, and in part by the Taiwan Ministry of Science and Technology under Grant MOST 106-2218-E-009-027-MY3 and MOST 108-2221-E-009-120-MY2.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Haynes, H.J. Fire Loss in the United States During 2014; National Fire Protection Association, Fire Analysis and Research Division: Quincy, MA, USA, 2015.
2. Quintiere, J.G. Principles of Fire Behavior; CRC Press: Boca Raton, FL, USA, 2016.

3. Quintiere, J.G.; Wade, C.A. Compartment fire modeling. In SFPE Handbook of Fire Protection Engineering; Springer: New York, NY, USA, 2016; pp. 981–995.
4. Hartin, E. Fire Behavior Indicators: Expertise. 2008. Available online: www.firehouse.com (accessed on 27 October 2020).
5. Chen, J.; Mu, X.; Song, Y.; Yu, M.; Zhang, B. Flame recognition in video images with color and dynamic features of flames. J. Auton. Intell. 2019, 2, 30–45. [CrossRef]
6. Celik, T.; Demirel, H. Fire detection in video sequences using a generic color model. Fire Saf. J. 2009, 44, 147–158. [CrossRef]
7. Horng, W.B.; Peng, J.W.; Chen, C.Y. A New Image-Based Real-Time Flame Detection Method Using Color Analysis. In Proceedings of the 2005 IEEE International Conference on Networking, Sensing and Control (ICNSC), Tucson, AZ, USA, 19–22 March 2005; pp. 100–105.
8. Hong, W.B.; Peng, J.W. A Fast Image-Based Fire Flame Detection Method Using Color Analysis. Tamkang J. Sci. Eng. 2008, 11, 273–285.
9. Walia, G.; Gupta, A.; Rajiv, K. Intelligent fire-detection model using statistical color models data fusion with Dezert–Smarandache method. Int. J. Image Data Fusion 2013. [CrossRef]
10. Zhao, Y.; Zhao, J.; Dong, E.; Chen, B.; Chen, J.; Yuan, Z.; Zhang, D. Dynamic Texture Modeling Applied on Computer Vision Based Fire Recognition. In Artificial Intelligence and Computational Intelligence; Deng, H., Miao, D., Lei, J., Wang, F.L., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 545–553.
11. Ebert, J.; Shipley, J. Computer vision based method for fire detection color videos. Int. J. Imaging 2009, 2, 22–34.
12. Toreyin, B.U.; Dedeoglu, Y.; Gudukbay, U.; Cetin, A.E. Computer Vision-Based Method for Real-Time Fire and Flame Detection. Pattern Recognit. Lett. 2006, 27, 49–58. [CrossRef]
13. Zhao, J.; Zhang, Z.; Han, S.; Qu, C.; Yuan, Z.; Zhang, D. SVM Based Forest Fire Detection Using Static and Dynamic Features. Comput. Sci. Inf. Syst. 2011, 8, 821–841. [CrossRef]
14. Paygude, S.; Vyas, V. Unified dynamic texture segmentation system based on local and global spatiotemporal techniques. Int. J. Reason. Based Intell. Syst. 2019, 11, 170–180.
15. Nguyen, T.T.; Nguyen, T.P.; Bouchara, F. Dynamic texture representation based on hierarchical local patterns. In International Conference on Advanced Concepts for Intelligent Vision Systems; Springer: Cham, Switzerland, 2020; pp. 277–289.
16. Gunay, O.; Tasdemir, K.; Toreyin, B.U.; Cetin, A.E. Fire Detection in Video Using LMS Based Active Learning. Fire Technol. 2010, 46, 551–557. [CrossRef]
17. Dimitropoulos, K.; Barmpoutis, P.; Grammalidis, N. Spatio-temporal Flame Modeling and Dynamic Texture Analysis for Automatic Video-Based Fire Detection. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 339–351. [CrossRef]
18. Avula, S.B.; Badri, S.J.; Reddy, G. A Novel Forest Fire Detection System Using Fuzzy Entropy Optimized Thresholding and STN-based CNN. In Proceedings of the IEEE International Conference on COMmunication Systems & NETworkS (COMSNETS), Bengaluru, India, 7–11 January 2020; pp. 750–755.
19. Manevitz, L.M.; Yousef, M. One-class SVMs for document classification. J. Mach. Learn. Res. 2001, 139–154.
20. Dai Duong, H.; Nguyen, D.D.; Ngo, L.T.; Tinh, D.T. On approach to vision based fire detection based on type-2 fuzzy clustering. In Proceedings of the 2011 International Conference of Soft Computing and Pattern Recognition (SoCPaR), Dalian, China, 14–16 October 2011; pp. 51–56.
21. Borges, P.V.K.; Izquierdo, E.A. Probabilistic Approach for Vision-Based Fire Detection in Videos. IEEE Trans. Circuits Syst. Video Technol. 2010, 20, 721–731. [CrossRef]
22. Zhang, R.; Cheng, Y.; Li, Y.; Zhou, D.; Cheng, S. Image-based flame detection and combustion analysis for blast furnace raceway. IEEE Trans. Instrum. Meas. 2019, 68, 1120–1131. [CrossRef]
23. Zhang, Z.; Zhao, J.; Zhang, D.; Qu, C.; Ke, Y.; Cai, B. Contour-Based Forest Fire Detection using FFT and Wavelet. In Proceedings of the International Conference on Computer Science and Software Engineering, Hubei, China, 12–14 December 2008; Volume 1, pp. 760–763.
24. Ko, B.C.; Ham, S.J.; Nam, J.Y. Modeling and Formalization of Fuzzy Finite Automata for Detection of Irregular Fire Flames. IEEE Trans. Circuits Syst. Video Technol. 2011, 21, 1903–1912. [CrossRef]
25. Gaur, A.; Singh, A.; Kumar, A.; Kumar, A.; Kapoor, K. Video Flame and Smoke Based Fire Detection Algorithms: A Literature Review. Fire Technol. 2020, 56, 1943–1980.

26. Wu, H.; Wu, D.; Zhao, J. An intelligent fire detection approach through cameras based on computer vision methods. Process Saf. Environ. Prot. 2019, 127, 245–256. [CrossRef]
27. Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767.
28. Li, P.; Zhao, W. Image fire detection algorithms based on convolutional neural networks. Case Stud. Therm. Eng. 2020, 19, 100625. [CrossRef]
29. Mao, W.; Wang, W.; Dou, Z.; Li, Y. Fire recognition based on multi-channel convolutional neural network. Fire Technol. 2018, 54, 531–554. [CrossRef]
30. Muhammad, K.; Ahmad, J.; Baik, S.W. Early fire detection using convolutional neural networks during surveillance for effective disaster management. Neurocomputing 2018, 288, 30–42. [CrossRef]
31. Hu, Y.; Lu, X. Real-time video fire smoke detection by utilizing spatial-temporal ConvNet features. Multimed. Tools Appl. 2018, 77, 29283–29301. [CrossRef]
32. Govil, K.; Welch, M.L.; Ball, J.T.; Pennypacker, C.R. Preliminary Results from a Wildfire Detection System Using Deep Learning on Remote Camera Images. Remote Sens. 2020, 12, 16631. [CrossRef]

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).