
Article

Improved Bilateral Filtering for a Gaussian Pyramid Structure-Based Image Enhancement Algorithm

Chang Lin 1,2,3, Hai-feng Zhou 1,3,* and Wu Chen 1

1 Marine Engineering College and Key Laboratory of Province Marine and Ocean Engineering, , 361021, ; [email protected] (C.L.); [email protected] (W.C.)
2 School of Mechanical and Electrical Engineering, University, Putian 351100, China
3 Key Laboratory of Modern Precision Measurement and Laser Nondestructive Detection Colleges and Universities in Fujian Province, Putian 351100, China
* Correspondence: [email protected]; Tel.: +86-199-5959-6511

 Received: 7 November 2019; Accepted: 26 November 2019; Published: 1 December 2019 

Abstract: To address the problem of unclear images affected by occlusion from fog, we propose an improved Retinex image enhancement algorithm based on the Gaussian pyramid transformation. Our algorithm features bilateral filtering as a replacement for the Gaussian function used in the original Retinex algorithm. Operation of the technique is as follows. To begin, we deduced the mathematical model for an improved bilateral filtering function based on the spatial domain kernel function and the pixel difference parameter. The input RGB image was subsequently converted into the Hue Saturation Intensity (HSI) color space, where the reflection component of the intensity channel was extracted to obtain an image whose edges were retained and which was unaffected by changes in brightness. Following reconversion to the RGB color space, color images of this reflection component were obtained at different resolutions using Gaussian pyramid down-sampling. Each of these images was then processed using the improved Retinex algorithm to improve the contrast of the final image, which was reconstructed using the Laplace algorithm. Results from experiments show that the proposed algorithm can enhance image contrast effectively, and the color of the processed image is in line with what would be perceived by a human observer.

Keywords: bilateral filtering; Gaussian down-sampling; luminance; enhancement; Retinex

1. Introduction

With the development of intelligent monitoring technology, a large number of cameras have been deployed in intelligent cars, traffic monitoring, and military investigation. However, weather conditions have a great influence on the quality of video images. In particular, images captured in foggy weather often require enhancement before they can be used for subsequent tasks such as image monitoring [1–3]. The scattering of atmospheric particles in fog introduces defects such as uneven exposure, low contrast, and serious color distortion in images captured by a charge-coupled device (CCD) camera. Since the physical limitations of CCD sensors make it impossible to engineer these devices to eliminate the influence of atmospheric scattering, the development of algorithms that reduce the effect of fluctuations in incident light, and thereby improve the quality of images, is of great significance. Retinex theory was originally developed by Edwin Land as a model for explaining human color perception [4–6]. Jobson et al. [7,8] subsequently extended this theory into a general-purpose image enhancement algorithm to address problems caused by the changing lighting and atmospheric conditions inherent to the process of acquiring images of Earth from space. The single-scale Retinex (SSR) algorithm provides the dynamic range compression, color constancy, and sharpening that are required to alleviate these problems. In essence, based on convolution with a Gaussian function with a

specific scale factor, it operates by separating incident light components from reflected light components to improve the contrast in an image. As such, it is suitable for processing multispectral satellite imagery, and applicable in diverse areas such as aerospace exploration [9], aviation safety [10,11], medical radiography [12], underwater photography [13,14], forensic investigations [15], and general-purpose photography [16,17]. Several improvements have been made to the SSR algorithm since its initial development, in order to achieve a balance between dynamic range compression and tonal rendition. As the SSR’s image enhancement effect depends on the scale factor of the Gaussian function, the multi-scale Retinex (MSR) algorithm [8] was developed to combine dynamic range compression, color consistency, and tonal rendition, producing images that compare favorably with human visual perception. Here, the input image is processed using three different scale factors (small, intermediate, and large), to ensure that the color fidelity and resolution of the image are retained. In the multi-scale Retinex with color restoration (MSRCR) algorithm, a supplementary color restoration process is included with the MSR algorithm to compensate for the color desaturation inherent in its operation. The resulting algorithm yields images with good color rendition even for severe gray-world violations. In Refs. [18] and [19], a different image enhancement framework based on the Retinex algorithm was proposed, in which the Gaussian filter (GSF) used in the traditional Retinex algorithm is replaced with a region covariance filter (RCF). In contrast to GSFs, for each pixel, the output of an RCF depends on the covariance between local image features. Therefore, an RCF is self-adapting to different pixels in an image, and it can thus estimate the nature of incident illumination more accurately than a GSF. Because of its ability to enhance contrast, eliminate noise, and enhance details in an image, the RCF–Retinex algorithm can be considered to have the best overall performance of the Retinex-based enhancement methods. Zhang et al. [20] proposed a mixed multi-scale Retinex method combined with fractional differentiation that can adjust its ambient illumination processing capacity. Hence, the dynamic range of an image captured in low-light conditions can be modified to reduce noise caused by poor lighting and enhance the quality of nighttime images. A new technique, similar to conventional enhancement methods in which the illumination component of the processed image is manipulated, was introduced in [21]. This technique uses an augmented-Lagrange-multiplier-based algorithm to simultaneously estimate a smoothed illumination component and the reflectance component of the image. While Retinex-based algorithms are able to enhance most images adequately, the output image can sometimes be optimized through visual analysis, depending on the user’s experience. As previously stated, with traditional Retinex methods, the image enhancement effect depends on the scale factor of the Gaussian function, leading to some limitations.
Hence, we propose an improved Retinex image enhancement algorithm based on Gaussian pyramid transformation with bilateral filtering to eliminate the impact of outliers in the histogram of the image and further improve its contrast, thus improving the overall visual performance of the Retinex algorithm. The rest of the paper is structured as follows. The improved Retinex model is briefly described in Section 2. Details of the improved bilateral filtering function are discussed in Section 3. An overview of the Gaussian–Laplacian multi-scale pyramid algorithm is given in Section 4. Analysis of the results of experiments is given in Section 5, and conclusions are presented in Section 6.

2. Improved Retinex Model

2.1. Single-Scale Retinex Model

The SSR model is a color-constant visual image enhancement method based on Land’s theory of human visual perception. In this model, the color of an object is considered to be fixed, while the corresponding image obtained by the camera is composed of a low-frequency incident light component and a high-frequency component resulting from the reflections of light on the surface of the object, as shown in Figure 1.

Figure 1. Image capture principle according to Retinex theory.

For a given image, the relationship between incident and reflected light components can be expressed as

S(x, y) = L(x, y) · R(x, y), (1)

where S(x, y), R(x, y), and L(x, y) denote the full color component, the reflected light component, and the incident light component of a given image, respectively.

The principle of traditional Retinex image enhancement is to remove or reduce the impact of incident light components on an image by deconstructing it into separate incident and reflection components. The contrast of the reflection image is consequently enhanced, the texture information of the image is compressed within a certain range, and at the same time, the basic tone of the original image is retained. For ease of calculation, logarithms of both sides of (1) are taken, yielding the following:

ln(S(x, y)) = ln(L(x, y)) + ln(R(x, y)). (2)

The mathematical form of the reflection component of the image can be obtained by rearranging (2) as follows:

ln(R_i(x, y)) = ln(S_i(x, y)) − ln(L_i(x, y)), (3)

where S_i(x, y), R_i(x, y), and L_i(x, y) denote the color component, the reflection component, and the incident light component of the i-th color channel, respectively.

Inspection of (3) shows that the reflection component of the image can be obtained by deducing the incident light component. Hence, the accuracy of incident light estimation directly affects the fog removal effect. Since L is low-frequency information, a Gaussian convolution function can be used to better estimate the incident light component from the image, i.e.,

L(x, y) = G(x, y) ∗ S(x, y), (4)

where ∗ represents the convolution operation, G(x, y) is a Gaussian function defined as G(x, y) = (1/(2πσ^2)) exp(−(x^2 + y^2)/(2σ^2)), and σ is the standard deviation of the Gaussian function, also known as the scale factor.
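As a concrete illustration of (1)–(4), the following sketch estimates the illumination of a single channel with a Gaussian convolution and recovers the log-domain reflectance. It is a minimal NumPy/SciPy example rather than the authors' implementation; the function name ssr_reflectance and the default scale factor are our own illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ssr_reflectance(channel, sigma=80.0, eps=1e-6):
    """Single-scale Retinex on one image channel, per Eqs. (1)-(4).

    channel: 2-D float array in [0, 1]; sigma: Gaussian scale factor
    (assumed default). Returns ln(R) = ln(S) - ln(G * S).
    """
    s = channel.astype(np.float64) + eps           # S(x, y), offset to avoid log(0)
    illumination = gaussian_filter(s, sigma)       # L(x, y) = G(x, y) * S(x, y), Eq. (4)
    return np.log(s) - np.log(illumination + eps)  # ln(R) = ln(S) - ln(L), Eq. (3)
```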
2.2. HSI Color Space

The HSI color space is an alternative to the RGB color space that better reflects human perception of color information. With this model, color is encoded using three different attributes: hue, saturation, and intensity. Here, hue and saturation are related to human perception of color, while, in contrast, intensity reflects the apparent brightness of an image, determined by the lighting conditions. Intensity is calculated as follows:

I(x, y) = (R(x, y) + G(x, y) + B(x, y)) / 3, (5)

where R, G, and B are the red, green, and blue components of an RGB-encoded image, respectively.

2.3. The Improved Retinex Algorithm

The dependence of the SSR’s enhancement effect on the Gaussian scale factor, σ, can be illustrated as follows. When the value of σ is large, the color fidelity of the processed image is good. However, details in the image become difficult to identify. Conversely, when this value is reduced such that the details are highlighted after enhancement, the color fidelity of the processed image is poor. In particular, in areas where the local color changes significantly, there is a “halo” effect after SSR enhancement (Figure 2b).


Figure 2. Illustration of the limitations of the single-scale Retinex (SSR) algorithm. (a) Input image. (b) Result of SSR enhancement. The “halo” effect observed in areas with significant changes in local color is marked in red.

To address the defects of a traditional Retinex algorithm, we proposed the use of Gaussian pyramid transformation and an improved bilateral filtering function that increases edge retention and improves noise removal to replace the Gaussian filter function. The mathematical model for this improved bilateral filter function is deduced by improving the spatial kernel function and using a rotating window function to determine the pixel difference scale parameters.

The operation of our technique proceeds as follows. The color space of the image is transformed from RGB to HSI in order to separate brightness from color information. Retinex enhancement with improved bilateral filtering is subsequently performed on the intensity component of the HSI image in order to obtain a reflection image whose edges are maintained and are unaffected by changes in brightness. This enhanced image is then returned to the RGB color space, following which a Gaussian–Laplacian pyramid is generated by successively applying a Gaussian filter to an input image and scaling down the resolution of the resulting image. Finally, sub-images in the pyramid are enhanced using the improved Retinex algorithm for multi-scale processing before Laplacian reconstruction is applied to complete image enhancement. A flowchart detailing the operation of this improved Retinex algorithm is shown in Figure 3.


Figure 3. Flowchart detailing image-enhancement using the improved Retinex algorithm.
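The flowchart in Figure 3 begins by separating brightness from color. The helpers below sketch that separation under the assumption of floating-point RGB arrays scaled to [0, 1]: the first extracts the intensity channel of Eq. (5), and the second writes a processed intensity back by rescaling the R:G:B ratios, which leaves hue and saturation unchanged. This intensity-only substitution is a simplification of the full HSI round trip described above, and the function names are ours.

```python
import numpy as np

def rgb_to_intensity(rgb):
    """HSI intensity channel, Eq. (5): I = (R + G + B) / 3."""
    return rgb.astype(np.float64).mean(axis=2)

def replace_intensity(rgb, new_intensity, eps=1e-6):
    """Rescale each RGB pixel so its mean matches the processed intensity.

    Keeping the R:G:B ratios fixed preserves hue and saturation while the
    enhanced intensity channel is carried back to the RGB image.
    """
    rgb = rgb.astype(np.float64)
    old_intensity = rgb.mean(axis=2)
    gain = new_intensity / (old_intensity + eps)
    return np.clip(rgb * gain[..., np.newaxis], 0.0, 1.0)
```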

3. Improved Bilateral Filtering Function

3.1. Traditional Bilateral Filtering Functions

In a traditional Retinex algorithm, the central-surround filter function is typically a Gaussian function. Hence, once operation of the algorithm is complete, the edges of the image are blurred, and details in the image disappear. To address this, in our algorithm we have adopted a bilateral filter [22] as an “edge maintenance–noise reduction” filter, which is defined mathematically as

f′(i, j) = Σ_{(m,n)∈Ω_{p,i,j}} w_d(m, n) w_r(m, n) f(m, n) / Σ_{(m,n)∈Ω_{p,i,j}} w_d(m, n) w_r(m, n), (6)

where

w_d(m, n) = exp(−((i − m)^2 + (j − n)^2) / (2σ_d^2)), (7)

w_r(m, n) = exp(−(f(i, j) − f(m, n))^2 / (2σ_r^2)). (8)

f(m, n) and f′(i, j) are respectively the input and filtered images, w_d and w_r are kernel functions in the space and range domains, and σ_d and σ_r are the spatial distance and pixel difference scale parameters, respectively. In addition, Ω_{p,i,j} is the set of pixel points in the input image within a (2p + 1) × (2p + 1) window, with (i, j) defined as the center and p as the radius of the filter. With this algorithm, a larger filter radius corresponds to a larger filter interval; however, the calculation is made correspondingly more complex.

From (6), (7), and (8), it can be noted that the efficiency of a bilateral filter is determined by σ_d and σ_r. Experimentation has revealed [23] that increasing the value of σ_d leads to a blurrier image after filtering. Similarly, increasing the value of σ_r increases the size of the 2p + 1 pixel interval and reduces the correlation between pixels. In typical situations, the values of these two parameters are manually adjusted according to the requirements of the particular application. Hence, in this paper, we propose an adaptive adjustment method for selecting the values of σ_d and σ_r.
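For reference, a direct (unoptimized) implementation of the traditional bilateral filter of Eqs. (6)–(8) can be written as below; the parameter defaults are illustrative assumptions, and the double loop is kept for clarity rather than speed.

```python
import numpy as np

def bilateral_filter(img, p=2, sigma_d=2.0, sigma_r=0.1):
    """Naive bilateral filter on a 2-D float image, per Eqs. (6)-(8).

    p is the window radius (window size 2p + 1); sigma_d and sigma_r are
    the spatial and range scale parameters.
    """
    img = img.astype(np.float64)
    h, w = img.shape
    padded = np.pad(img, p, mode="reflect")
    out = np.empty_like(img)

    # Spatial kernel w_d, Eq. (7): depends only on the offset from the centre.
    yy, xx = np.mgrid[-p:p + 1, -p:p + 1]
    w_d = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma_d ** 2))

    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2 * p + 1, j:j + 2 * p + 1]
            # Range kernel w_r, Eq. (8): depends on the intensity difference.
            w_r = np.exp(-(img[i, j] - window) ** 2 / (2.0 * sigma_r ** 2))
            weights = w_d * w_r
            out[i, j] = np.sum(weights * window) / np.sum(weights)  # Eq. (6)
    return out
```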
3.2. Improved Spatial Domain Kernel Function

To improve the efficiency of the w_d kernel function, we considered the influence of different pixels on noise smoothing. In the spatial domain, the 2p + 1 filter window parameter is set with (i, j) defined as the central filter point. Inspection of (7) shows that pixels located closer to the central filter point have a greater influence on noise smoothing in the spatial domain. Hence, to address this, we defined our improved spatial domain kernel function as follows:

w_d(m, n) = (1 − √((i − m)^2 + (j − n)^2) / (2p + 1)) · exp(−((i − m)^2 + (j − n)^2) / (2σ_d^2)), (9)

where the spatial distance scale parameter is defined as σ_d = p/2, since 95% of possible outcomes from the Gaussian function are concentrated in the [−2σ_d, 2σ_d] range. Thus, in essence, σ_d is a spatial standard deviation, and after experimentation, we chose p = 5.
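A minimal sketch of the improved spatial-domain kernel of Eq. (9) with σ_d = p/2 is given below. Returning the whole (2p + 1) × (2p + 1) weight window is our choice of interface; in the filter of Eq. (6) this kernel would simply replace w_d.

```python
import numpy as np

def improved_spatial_kernel(p=5):
    """Improved spatial-domain kernel w_d of Eq. (9) with sigma_d = p / 2.

    Returns a (2p + 1) x (2p + 1) array; the linear term down-weights
    samples that lie far from the window centre.
    """
    sigma_d = p / 2.0
    yy, xx = np.mgrid[-p:p + 1, -p:p + 1]
    dist2 = xx ** 2 + yy ** 2
    linear = 1.0 - np.sqrt(dist2) / (2 * p + 1)   # distance penalty term
    gauss = np.exp(-dist2 / (2.0 * sigma_d ** 2)) # Gaussian term
    return linear * gauss
```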

3.3. Improved Selection of Pixel Difference Scale Parameters

In the Ω_{p,i,j} point set, which is used as a sample to find the standard deviation (σ_n) of pixel intensities in an image, the gray value of each pixel is defined as [a(2p − 1, 2p − 1), a(2p − 1, 2p), ..., a(2p + 1, 2p + 1)]. However, unlike with the spatial domain, selection of this standard deviation as the scale parameter leads to an excessively large value for σ_r, as shown in [24]. Hence, a rotating window function for determining the value of σ_r is proposed as follows. A (2p + 1) × (p + 1) rectangular window centered on the (x, y) pixel is selected, with an offset angle defined as 2π/K. The standard deviation of pixels in this rectangular window is then calculated, following which the window is rotated by the offset angle. This calculation and rotation process is successively iterated for a complete revolution (i.e., K rotations should be completed). Finally, the minimum pixel standard deviation min(σ_n) computed is selected as σ_r; the window in which this value was calculated is used in subsequent filtering. σ_r is defined as follows:

σ_r = min(σ_n) = min_{k=0,1,...,K−1} { std[ f(x, y)_{2πk/K} ] }, (10)

where f(x, y)_{2πk/K} denotes the pixel values covered by the window rotated by 2πk/K, and K = 8.
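The rotating-window selection of Eq. (10) can be sketched as follows. The text does not specify how the rotated rectangle is sampled, so nearest-neighbour lookup with border clipping is assumed here, and the short side of the (2p + 1) × (p + 1) window is centred approximately.

```python
import numpy as np

def rotating_window_sigma(img, x, y, p=5, K=8):
    """Pixel-difference scale parameter at pixel (x, y), per Eq. (10).

    A (2p + 1) x (p + 1) rectangle centred on (x, y) is rotated by
    2*pi*k/K for k = 0, ..., K-1; the minimum standard deviation of the
    covered pixels over the K orientations is returned.
    """
    img = img.astype(np.float64)
    h, w = img.shape
    long_off = np.arange(-p, p + 1, dtype=np.float64)         # 2p + 1 samples
    short_off = np.arange(p + 1, dtype=np.float64) - p / 2.0  # p + 1 samples
    uu, vv = np.meshgrid(long_off, short_off, indexing="ij")

    stds = []
    for k in range(K):
        theta = 2.0 * np.pi * k / K
        c, s = np.cos(theta), np.sin(theta)
        # Rotate the sample offsets and round to the nearest pixel index.
        rows = np.clip(np.rint(x + c * uu - s * vv).astype(int), 0, h - 1)
        cols = np.clip(np.rint(y + s * uu + c * vv).astype(int), 0, w - 1)
        stds.append(img[rows, cols].std())
    return min(stds)  # Eq. (10): minimum standard deviation over the K windows
```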

4. Gaussian–Laplacian Multi-Scale Pyramid Algorithm

Gaussian–Laplacian pyramid [24] generation is a multi-scale image decomposition method with a sampling rate of 2, i.e., the resolution of a resulting image is halved in each successive decomposition. The resulting pyramid consists of layers of decomposed images decreasing in resolution from bottom to top, with the original image defined as the base level of the pyramid (level 0). In detail, beginning from level 0, decomposition consists of convolution of the input image with a Gaussian low-pass filter followed by a down-sample operation in order to halve the horizontal and vertical resolution of the processed image. The filtered down-sampled image is subsequently defined as the image at level 1 of the pyramid (F1). This operation is repeated, as defined by the number of levels required in the Gaussian pyramid. Mathematically, this process is defined as

F_l(i, j) = Σ_{m=−2}^{2} Σ_{n=−2}^{2} G(m, n) F_{l−1}(2i + m, 2j + n), (11)

where G(m, n) is a low-pass filter, l is the level of the pyramid, N is the number of layers in the pyramid, 0 ≤ l ≤ N, i ≤ R_l, j ≤ C_l, and R_l, C_l represent the number of rows and columns in F_l, respectively; in this work, N = 3.
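A sketch of the decomposition in Eq. (11) is given below; a SciPy Gaussian filter with an assumed sigma stands in for the 5 × 5 low-pass kernel G, and the image is assumed to be a single-channel float array.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(img, levels=3, sigma=1.0):
    """Gaussian pyramid of Eq. (11): low-pass filter, then keep every
    second row and column, repeated `levels` times (level 0 = input)."""
    pyramid = [img.astype(np.float64)]
    for _ in range(levels):
        blurred = gaussian_filter(pyramid[-1], sigma)  # G * F_{l-1}
        pyramid.append(blurred[::2, ::2])              # 2x down-sampling
    return pyramid
```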

As each image in the pyramid, F_l, is of a different resolution, iterative image generation and boundary closure may be affected by noise and dispersed sampling. To ensure the structural information in the highest resolution image is retained in subsequent pyramid levels, image enhancement is required. To do this, we employed the improved Retinex algorithm to recover details in images lost in processing. Enhanced pyramid images are subsequently termed F'_l.

To complete the image enhancement process, the modified Gaussian pyramid is reconstructed to yield a single image with the same resolution as the original input. Reconstruction is completed by generating a Laplacian pyramid from the image-enhanced Gaussian pyramid. This reconstruction process is detailed as follows:

(1) The topmost image in the Gaussian pyramid, F'_{l+1}, is interpolated to obtain an image termed D_l, which has the same resolution as the image in the preceding layer (F'_l).
(2) D_l is subtracted from F'_l, with the difference, F_N, stored in the Laplace residual set. F_N is subsequently added to F'_l to yield F'_N, which is interpolated for reconstruction of the image in the preceding layer.
(3) Computation of the Laplacian and image interpolation continues iteratively until the reconstructed image is of the same resolution as the original input. Using the terminology described above, the process can be described as (12). A Gaussian–Laplacian multi-scale pyramid built from intermediate results at different pyramid levels is presented in Figure 4. In our experiment, the sampling value of the pyramid was 3.

F_N = F'_l − D_l,
F'_N = F_N + F'_l. (12)
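The reconstruction steps (1)–(3) can be sketched as follows for a list of enhanced levels F'_0, ..., F'_N ordered fine to coarse. Bilinear interpolation is assumed for the up-sampling; boundary handling and any normalization used in the original implementation are not specified in the text, so Eq. (12) is applied literally.

```python
import numpy as np
from scipy.ndimage import zoom

def upsample_to(img, shape):
    """Interpolate a 2-D image to the target spatial shape (bilinear)."""
    return zoom(img, (shape[0] / img.shape[0], shape[1] / img.shape[1]), order=1)

def laplacian_reconstruct(enhanced):
    """Collapse enhanced pyramid levels [F'_0, ..., F'_N] (fine to coarse)
    into one full-resolution image, applying Eq. (12) at every level."""
    current = enhanced[-1]                     # start from the topmost (coarsest) level
    for finer in reversed(enhanced[:-1]):
        d = upsample_to(current, finer.shape)  # D_l: coarser result interpolated to the finer grid
        residual = finer - d                   # F_N = F'_l - D_l, the Laplace residual
        current = residual + finer             # F'_N = F_N + F'_l, passed up to the next level
    return current
```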

Figure 4. A Gaussian–Laplacian multi-scale pyramid from intermediate results of different pyramid levels. (Column 1) The Gaussian pyramid images of different pyramid levels. (Column 2) The enhanced pyramid images of different pyramid levels. (Column 3) Processed images of Laplace reconstruction; the final image of the reconstruction is in Column 3, first picture.

5. Experiments and Results

To verify the effectiveness of our algorithm, we compared its operation in experiments with those of the original SSR, MSRCR, non-local Retinex (NLR) [25], and Illumination Invariant Face Recognition (INF) [26,27] algorithms based on a database [28], evaluating their performance using objective and subjective measures of image perception. The proposed operator was implemented in MATLAB 2014b on a computer running the Windows 7 (64-bit) operating system.


5.1. Results of Improved Bilateral Filtering

In this experiment, we selected the database [29], which includes 24 .png images (typically 768 × 512 or of similar size), for processing. Gaussian white noise with mean value μ = 0 and standard deviation σ = 0.01, 0.02, 0.05, 0.1, and 0.2 was added to these images (an example is shown in Figure 5) to degrade their quality such that the performance of the different filtering algorithms could be observed. We note that with the original bilateral filter, edges in the images were lost, as they had all been smoothed. In contrast, Gaussian noise is barely perceptible following operation of our algorithm, as shown in Figure 5d, and the overall image quality is noticeably better. The edges in the image are retained after denoising, as shown in Figure 6c.

Figure 5. Comparison of results of bilateral filtering (one image of the database: “kodim08”). (a) Original image. (b) Image with Gaussian noise (σ = 0.01). (c) Image processed using the original bilateral filtering algorithm. (d) Image processed using the improved bilateral filtering algorithm.


Figure 6. Magnified view of the effects of bilateral filtering focusing on the region defined by the red box in Figure 5. (a) Image with Gaussian noise. (b) Image processed using the original bilateral filtering algorithm. (c) Image processed using the improved bilateral filtering algorithm.

For additional verification, we compared the two filtering algorithms with respect to Peak Signal to Noise Ratio (PSNR), Mean Squared Error (MSE), Structural Similarity (SSIM) [30], Multi-scale Structural Similarity (MS-SSIM) [31], Visual Information Fidelity (VIF) and Full-Reference Visual Information Fidelity (FR-VIF) [32], and visual signal-to-noise ratio (VSNR) [33], as shown in Table 1. The MATLAB wrapper code for these measures is presented in [34]. In this case, the objective measure scores are directly averaged, and we highlight the best result of the two algorithms with boldface. We note that the intensity of noise in the images following processing is higher with the traditional bilateral filtering algorithm than it is with our modified solution, based on a comparison of the values of the seven Image Quality Assessment (IQA) measures.
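For reference, the first two columns of Table 1 can be reproduced with the elementary definitions below, assuming 8-bit images compared against the noise-free originals and scores averaged over the 24 database images; the remaining measures come from the wrapper cited in the text.

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between two images on the 0-255 scale."""
    diff = ref.astype(np.float64) - test.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB, computed from the MSE."""
    err = mse(ref, test)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)
```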


Table 1. Evaluation of bilateral filtering algorithms based on database [29] (24 images).

Gaussian White Noise | Algorithm | PSNR | MSE | SSIM | MS-SSIM | VIF | FR-VIF | VSNR
σ = 0.01 | Original algorithm | 22.493 | 376.772 | 0.542 | 0.912 | 0.361 | 0.231 | 18.221
σ = 0.01 | Modified algorithm | 27.034 | 129.231 | 0.799 | 0.964 | 0.566 | 0.379 | 26.314
σ = 0.02 | Original algorithm | 20.505 | 587.780 | 0.488 | 0.898 | 0.327 | 0.205 | 17.375
σ = 0.02 | Modified algorithm | 28.414 | 93.752 | 0.928 | 0.993 | 0.794 | 0.463 | 34.386
σ = 0.05 | Original algorithm | 18.014 | 1035.936 | 0.451 | 0.881 | 0.284 | 0.182 | 16.101
σ = 0.05 | Modified algorithm | 27.220 | 123.450 | 0.946 | 0.991 | 0.691 | 0.469 | 30.743
σ = 0.1 | Original algorithm | 16.222 | 1560.708 | 0.431 | 0.869 | 0.257 | 0.170 | 15.226
σ = 0.1 | Modified algorithm | 27.507 | 115.525 | 0.970 | 0.995 | 0.713 | 0.502 | 32.404
σ = 0.2 | Original algorithm | 16.127 | 1610.014 | 0.418 | 0.856 | 0.249 | 0.165 | 14.992
σ = 0.2 | Modified algorithm | 22.286 | 481.923 | 0.860 | 0.952 | 0.446 | 0.365 | 21.867

5.2. Subjective Analysis of Image Enhancement Algorithms

To verify the effectiveness of our improved image enhancement technique, we conducted experiments comparing its performance to those of the SSR, MSRCR, NLR, and INF algorithms based on a database [28] with 4322 images, which is a real foggy test dataset provided by Gac Research Institute. Partial results of these experiments are shown in Figure 7, with Column 1 depicting original images, Column 2 depicting images processed using the SSR algorithm, Column 3 depicting images processed using the MSRCR algorithm, Column 4 depicting images processed using the NLR algorithm, Column 5 depicting images processed using the INF algorithm, and Column 6 depicting images processed using our improved image enhancement technique.

There are degradation and dark regions in the original images in Figure 7, which are characteristic of uneven lighting, making them suitable for image enhancement. We note that with all five algorithms considered, most of the foreground details are preserved, and the color in the resulting images is enhanced following processing. However, with the SSR, MSRCR, and NLR algorithms, the “halo” effect can also be observed, as seen most obviously in Figure 7(a-2), (d-3), (e-3), (f-3), and (f-4). The MSRCR and INF algorithms darken the image and lose details, as shown in Figure 7(a-3–f-3) and Figure 7(a-5–f-5). While details in the blurred areas of the original images can be enhanced using the other four algorithms, there is a deviation between the color in the input and output images. Hence, the color of the imaged object is not restored, and the visual representation of this object is thus distorted. In addition, the MSRCR and NLR algorithms were not effective in removing the fog from the foreground of the images, as shown in Figure 7(a-4–f-4). In contrast, the texture and color contrast of images processed using our technique are improved, and enhancement is effective for images in different low-light situations. In addition, following processing, the deviation of color between the input and output images is small. As color fidelity is better, the overall visual representation of the image is retained. Finally, details in the image are more clearly identified, as can be seen from Figure 7(a-6–f-6).

For a more quantitative representation of the effect of the different algorithms, histograms of the grayscale intensity of the images in Row (a) of Figure 7 are shown in Figure 8. From these, it can be seen that the distribution of pixel intensities is more uniform and the average grayscale intensity of the image is reduced. This is indicative of the fog removal effect obtained with our algorithm; fog is observed as a brighter area in an image, with a correspondingly large grayscale intensity. A reduction in the average grayscale intensity of an image thus suggests the removal of fog.
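The histogram comparison of Figure 8 reduces to the simple statistics below; the helper assumes an 8-bit grayscale image, and the mean intensity is used only as the rough fog indicator discussed above.

```python
import numpy as np

def gray_stats(img_gray):
    """256-bin histogram and mean intensity of an 8-bit grayscale image.

    A flatter histogram indicates more uniform contrast, and a lower mean
    intensity is read here as a rough sign that bright fog has been removed.
    """
    hist, _ = np.histogram(img_gray, bins=256, range=(0, 256))
    return hist, float(img_gray.mean())
```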



Figure 7. Subjective comparison of the performance of different enhancement algorithms (six images of the dataset). (Column 1) Original images. (Column 2) Images processed using the SSR algorithm. (Column 3) Images processed using the multi-scale Retinex with color restoration (MSRCR) algorithm. (Column 4) Images processed using the NLR algorithm. (Column 5) Images processed using the INF algorithm. (Column 6) Images processed using our algorithm.



Figure 8. Grayscale-based comparison of the performance of different enhancement algorithms. Histograms were computed using the images in Row (a) of Figure 7. (a) Original image. (b) Image processed using the SSR algorithm. (c) Image processed using the MSRCR algorithm. (d) Image processed using the NLR algorithm. (e) Image processed using the INF algorithm. (f) Image processed using our algorithm.

5.3. Objective Evaluation of Image Enhancement Algorithms

For objective evaluation of the different image enhancement algorithms, we calculated the Peak Signal to Noise Ratio (PSNR), Mean Squared Error (MSE), SSIM, MS-SSIM, VIF, Information Content Weighted Structural Similarity (IW-SSIM) [35], and VSNR of the images in the database, as shown in Table 2. In this case, the correlation scores are directly averaged, and we highlight the best result with boldface, as each of these parameters characterizes a different aspect of the image.

Table 2. Objective comparison of the performance of different enhancement algorithms based on Peak Signal to Noise Ratio (PSNR), Mean Squared Error (MSE), SSIM, MS-SSIM, VIF, IW-SSIM, and VSNR.

Algorithm | PSNR | MSE | SSIM | MS-SSIM | VIF | IW-SSIM | VSNR
OI | N/A | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | N/A
SSR | 18.584 | 1097.904 | 0.734 | 0.761 | 1.192 | 0.728 | 7.194
MSRCR | 12.034 | 4438.279 | 0.547 | 0.761 | 0.826 | 0.731 | 12.376
NLR | 13.751 | 3424.179 | 0.767 | 0.799 | 0.867 | 0.662 | 6.932
INF | 10.357 | 6132.800 | 0.632 | 0.480 | 0.355 | 0.736 | 7.956
Our | 24.071 | 333.602 | 0.851 | 0.914 | 0.994 | 0.894 | 13.224

Table 2 shows that images processed using our algorithm had the best values for most measures, suggesting that more high-frequency information is preserved by our technique and that details in the image are subsequently enhanced. The PSNR and SSIM values were maximized by our algorithm, indicating that the contrast of the images is improved with our technique and that details are more identifiable. The MS-SSIM and VIF values were also among the highest with our technique, indicating improved reconstruction quality. Finally, the remaining measures likewise improved using our algorithm, demonstrating that it leads to significant and consistent image enhancement performance.

6. Conclusions

In this paper, we proposed a new Retinex-based image enhancement algorithm to address the problem of image blurring caused by uneven illumination. The technique features an improved bilateral filtering function, addressing the problem of an over-enhanced luminance image and the associated loss of texture. In addition, we combine our modified Retinex model with Gaussian pyramid down-sampling for multi-scale processing, in order to eliminate blurring problems and increase the contrast of the final image. We compared the performance of our technique with those of the SSR, MSRCR, NLR, and INF algorithms in experiments, with the results highlighting its improved image enhancement. However, the combination of the improved bilateral filtering function and the Gaussian pyramid transformation increases the time complexity of the technique, a problem which will be addressed in future studies.

Author Contributions: Conceptualization, C.L., H.Z. and W.C.; methodology, C.L. and H.Z.; validation, C.L., H.Z. and W.C.; formal analysis, C.L.; writing—original draft preparation, C.L. and H.Z.; writing—review and editing, C.L., H.Z. and W.C.; funding acquisition, H.Z.
Funding: This research was supported in part by the National Natural Science Foundation of China under Grant 51179074; the Natural Science Foundation of Fujian Province under Grant 2018J01495; the Young and Middle-Aged Teachers Projects of Fujian Province under Grant JAT170507, JAT170507(p); the Putian Science and Technology Bureau project under Grant 2018RP4002; in part by Modern Precision Measurement and Laser Nondestructive Testing under Grant B17119; and the Doctoral Research Start-up Fund of Jimei University under Grant ZQ2013007.
Acknowledgments: The authors would like to thank Sun Hui and Juntao Chen for their help.
Conflicts of Interest: The authors declare no conflict of interest.

References

1. Chang, H.; Ng, M.K.; Wang, W.; Zeng, T. Retinex image enhancement via a learned dictionary. Opt. Eng. 2015, 54, 013107. [CrossRef] 2. Soori, U.; Yuen, P.W.; Han, J.W.; Ibrahim, I.; Chen, W.; Hong, K.; Merfort, C.; James, D.B.; Richardson, M.A. Target recognitions in multiple-camera closed-circuit television using color constancy. Opt. Eng. 2013, 52, 047202. [CrossRef]

3. Yoon, J.; Choi, J.; Choe, Y. Efficient image enhancement using sparse source separation in the Retinex theory. Opt. Eng. 2017, 56, 113103. [CrossRef] 4. Land, E.H. Recent advances in retinex theory and some implications for cortical computations: Color vision and the natural image. Proc. Natl. Acad. Sci. USA 1989, 80, 5163–5169. [CrossRef][PubMed] 5. Land, E.H. An alternative technique for the computation of the designator in the retinex theory of color vision. Proc. Natl. Acad. Sci. USA 1986, 83, 3078–3080. [CrossRef] 6. Land, E.H. Recent Advances in Retinex Theory; Vision Research: Wayne, NJ, USA, 1986; Volume 26, pp. 7–21. 7. Jobson, D.J.; Rahman, Z.U.; Woodell, G.A. Properties and Performance of a Center/Surround Retinex. IEEE Trans. 1997, 6, 451–462. [CrossRef] 8. Jobson, D.; Rahman, Z.; Woodell, G. A multiscale retinex for bridging the gap between color images and the human observation of scenes. Image Proc. IEEE Trans. 1997, 6, 965–976. [CrossRef] 9. Jobson, D.J.; Rahman, Z.U.; Woodell, G.A. Retinex image processing: Improved fidelity to direct visual observation. In Proceedings of the Color and Imaging Conference, Scottsdale, AZ, USA, 19–22 November 1996; No. 1. pp. 124–125. 10. Jiang, B.; Rahman, Z. Runway hazard detection in poor visibility conditions. Proc. SPIE 2012, 8300, 83000H. 11. Zavalaromero, O.; Meyerbaese, A.; Meyerbaese, U. Multiplatform GPGPU implementation of the active contours without edges algorithm. Proc. SPIE 2012, 8399, 1083990E. 12. Rahman, Z.; Woodell, G.A.; Jobson, D.J. Retinex image enhancement: Application to medical images. In Proceedings of the NASA Workshop on New Partnerships in Medical Diagnostic Imaging, Greenbelt, MD, USA, 17–18 July 2001; pp. 1–23. 13. Jobson, D.J.; Rahman, Z.; Woodell, G. Spatial aspect of color and scientific implications of retinex image processing. Vis. Inf. Process. X 2001, 4388, 117–128. 14. Wang, Y.X.; Diao, M.; Han, C. Underwater imaging enhancement algorithm under conventional light source based on iterative histogram equalization. Acta Photonica Sin. 2018, 47, 1101002. [CrossRef] 15. Woodell, G.; Rahman, Z.; Jobson, D.J.; Hines, G. Enhanced images for checked and carry-on baggage and cargo screening. Proc. SPIE 2004, 5403, 582. 16. Rahman, Z.; Jobson, D.J.; Woodell, G.A. Retinex processing for automatic image enhancement. J. Electron. Imaging 2004, 13, 100–111. 17. Loh, Y.P.; Liang, X.; Chan, C.S. Low-light image enhancement using Gaussian Process for features retrieval. Signal Process. Image Commun. 2019, 74, 175–190. [CrossRef] 18. Tao, F.; Yang, X.; Wu, W.; Liu, K.; Zhou, Z.; Liu, Y. Retinex-based image enhancement framework by using region covariance filter. Soft Comput. 2018, 22, 1399–1420. [CrossRef] 19. Karacan, L.; Erdem, E.; Erdem, A. Structure-preserving image smoothing via region covariances. ACM Trans. Graph. (TOG) 2013, 32, 176. [CrossRef] 20. Zhang, Y.; Zheng, J.; Kou, X.; Xie, Y. Night View Road Scene Enhancement Based on Mixed Multi-scale Retinex and Fractional Differentiation. In Proceedings of the International Conference on Brain Inspired Cognitive Systems, Xi’an, China, 7–8 July 2018; Springer: Cham, Switzerland; pp. 818–826. 21. Li, M.; Liu, J.; Yang, W.; Sun, X.; Guo, Z. Structure-revealing low-light image enhancement via robust Retinex model. IEEE Trans. Image Process. 2018, 27, 2828–2841. [CrossRef] 22. Tomasi, C.; Manduchi, R. Bilateral filtering for gray and color images. In Proceedings of the Sixth International Conference on Computer Vision, Bombay, India, 4–7 January 1998; Volume 98, p. 1. 
23. Shi, K.Q.; Wei, W.G. Image denoising method of surface defect on cold rolled aluminum sheet by bilateral filtering. Surf. Technol. 2018, 47, 317–323. 24. Burt, P.; Adelson, E. The Laplacian pyramid as a compact image code. IEEE Trans. Commun. 1983, 31, 532–540. [CrossRef] 25. Zosso, D.; Tran, G.; Osher, S.J. Non-local retinex—A Unifying Framework and Beyond. SIAM J. Imaging Sci. 2015, 8, 787–826. [CrossRef] 26. Zosso, D.; Tran, G.; Osher, S. A unifying retinex model based on non-local differential operators. Comput. Imaging XI 2013, 8657, 865702. 27. The INface Toolbox v2.0 for Illumination Invariant Face Recognition. Available online: https://www.mathworks.com/matlabcentral/fileexchange/26523-the-inface-toolbox-v2-0-for-illumination-invariant-face-recognition (accessed on 10 October 2019).

28. Liu, Y.; Zhao, G.; Gong, B.; Li, Y.; Raj, R.; Goel, N.; Tao, D. Improved techniques for learning to dehaze and beyond: A collective study. arXiv 2018, arXiv:1807.00202. 29. Kodak Image Dataset. Available online: http://www.cs.albany.edu/~xypan/research/snr/Kodak.html (accessed on 15 October 2019). 30. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Proc. 2004, 13, 600–612. [CrossRef][PubMed] 31. Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multi-scale structural similarity for image quality assessment; Invited Paper. In Proceedings of the IEEE Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 9–12 November 2003; Volume 2, pp. 1398–1402. 32. Sheikh, H.R.; Bovik, A.C. Image information and visual quality. IEEE Trans. Image Process. 2006, 15, 430–444. [CrossRef] 33. Chandler, D.M.; Hemami, S.S. VSNR: A wavelet-based visual signal-to-noise ratio for natural images. IEEE Trans. Image Process. 2007, 16, 2284–2298. [CrossRef] 34. Image-Quality-Tools. Available online: https://github.com/sattarab/image-quality-tools/tree/master/metrix_mux (accessed on 15 October 2019). 35. Wang, Z.; Li, Q. Information Content Weighting for Perceptual Image Quality Assessment. IEEE Trans. Image Process. 2011, 20, 1185–1198. [CrossRef]

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).