
Article

GLAGC: Adaptive Dual-Gamma Function for Image Illumination Perception and Correction in the Wavelet Domain

Wenyong Yu 1, Haiming Yao 1, Dan Li 1, Gangyan Li 2 and Hui Shi 2,*

1 School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China; [email protected] (W.Y.); [email protected] (H.Y.); [email protected] (D.L.)
2 School of Mechanical and Electronic Engineering, Wuhan University of Technology, Wuhan 430074, China; [email protected]
* Correspondence: [email protected]

Abstract: Low-contrast or uneven illumination in real-world images will cause a loss of details and increase the difficulty of pattern recognition. An automatic image illumination perception and adaptive correction algorithm, termed GLAGC, is proposed in this paper. Based on Retinex theory, the illumination of an image is extracted through the discrete wavelet transform. Two features that characterize the image illumination are designed. The first is the spatial distribution feature, which is applied to the adaptive gamma correction of local uneven lighting. The other is the global statistical luminance feature. Through a training set containing images with various illuminance conditions, the relationship between the illumination level and this feature is estimated under the maximum entropy criterion, and it is used to perform adaptive gamma correction of global low illumination. Moreover, smoothness preservation is performed in the high-frequency subbands to preserve edge smoothness. To eliminate low-illumination noise after wavelet reconstruction, an adaptive stabilization factor is derived. Experimental results demonstrate the effectiveness of the proposed algorithm. By comparison, the proposed method yields comparable or better results than state-of-the-art methods in terms of efficiency and quality.

Keywords: adaptive gamma correction; image illumination correction; wavelet transforms; image enhancement; illumination perception

Citation: Yu, W.; Yao, H.; Li, D.; Li, G.; Shi, H. GLAGC: Adaptive Dual-Gamma Function for Image Illumination Perception and Correction in the Wavelet Domain. Sensors 2021, 21, 845. https://doi.org/10.3390/s21030845

Received: 8 December 2020; Accepted: 25 January 2021; Published: 27 January 2021

1. Introduction

Uneven or insufficient illumination will cause the contrast of an image to be too low, making it difficult to observe the details of the image.
We usually pursue enhancement results in which local variation is obvious while the global variation is in accordance with the original intensity, which is denoted as naturalness preservation. Researchers have proposed many enhancement methods to make these images have a more pleasing visual effect or to obtain high-visibility effects.

Modulation schemes, such as the statistics-based histogram equalization (HE) method, directly adjust the pixel intensity of the image to achieve enhancement. This kind of method may cause artifacts and the loss of naturalness. The nonlinear gamma correction approach uses different mapping curves to achieve excellent performance in complex lighting conditions [1], but the parameters need manual design with prior knowledge, and the spatial information is not considered [2] when operating on each pixel.

Converting pixel information to other domains can yield more internal information of the image, as with the discrete Fourier transform, discrete cosine transform (DCT), and discrete wavelet transform (DWT). These solutions achieve their effects through filters in the frequency domain and reconstruction in the spatial domain, such as homomorphic filtering, which may result in the loss of potentially useful visual cues [3].


To conduct an analysis from the perspective of the physical imaging process, Retinex theory was proposed to model the relationship between the illumination component and the reflection component of an image [4,5]. A series of methods were derived from it, such as the single-scale Retinex (SSR) algorithm [6] and the multiscale Retinex (MSR) algorithm [7], to enhance image details. However, the naturalness of the images may be destroyed, and it is unreasonable to treat only the reflectance layer as the enhanced image [8].

As a spatial-frequency analysis tool, the DWT is applied to decompose and enhance image features at different resolutions. It has been utilized by researchers in fields such as image resolution enhancement [9] and image denoising [10].

Existing methods have difficulty balancing correction, naturalness preservation, restoration, and algorithmic efficiency. A simple but efficient algorithm for image illuminance perception and correction in the wavelet domain is proposed in this paper. The DWT is used to separate the illuminance into the low-frequency subband, which is enhanced by adaptive gamma correction considering both the spatial and statistical characteristics of the image. For naturalness preservation, adaptive punishment adjustment is applied to the high-frequency subbands. Finally, a stabilization factor is designed for color restoration so that extralow illumination can be corrected with noise suppression. To the best of our knowledge, no previous work has proposed an adaptive dual-gamma correction method in the wavelet domain.

The rest of the paper is organized as follows: Section 2 provides a brief discussion of related works. Section 3 presents the detailed process of the proposed method. In Section 4, the superiority of the proposed method is supported by experimental results and relevant evaluation against state-of-the-art models. Finally, the conclusions are presented in Section 5.

2. Related Works

To solve the problems mentioned above, improvements have been proposed in earlier works. There are several variations of the HE method, such as contrast-limited adaptive histogram equalization (CLAHE) [11] and brightness-preserving bi-histogram equalization (BBHE) [12]. In the frequency domain, improved methods such as illuminance normalization based on homomorphic filtering [3], color image enhancement by compressed DCT [13], and the alpha-root method based on the quaternion Fourier transform [14] have been proposed. The following methods are comparable to our work:

• Improved gamma correction: For parameter adjustment, some adaptive methods have been derived, such as adaptive gamma correction based on the cumulative histogram (AGCCH) [15], adaptive gamma correction to enhance the contrast of brightness-distorted images [16], the adaptive gamma correction with weighting distribution (AGCWD) method [17], and a 2-D adaptive gamma correction method [18], which takes the variable brightness map of image spatial information into account, although excessive contrast enhancement may occur. In addition, few methods consider both local and global enhancement, and overenhancement sometimes appears in some portions of the image.

• Retinex-based model: Fu et al. [19] proposed a simultaneous illumination and reflectance estimation (SIRE) method to preserve more image details when estimating the reflection intensity. Wang [20] used Retinex theory to construct an image prior model, used a hierarchical Bayesian model to estimate the model parameters, and achieved good results. Cheng [21] proposed a nonconvex variational Retinex model to improve the brightness while maintaining the texture and naturalness of an image. These models based on Retinex theory can achieve pleasing reflection separation through iterations. However, the algorithms are time-consuming, which may limit their practical applications.
Low-light image enhancement via well-constructed illumination map estimation (LIME) was proposed by Guo [2]; however, oversaturation usually occurs in some portions of an image.

• Combining the wavelet transform approach: By introducing the wavelet transform, a nonlinear enhancement function was designed based on the local dispersion of the wavelet coefficients [21]. Zotin [22] proposed an algorithm combining the MSR algorithm with the wavelet transform and achieved a better correction effect in terms of efficiency. A dual-tree complex wavelet transform for low-light image enhancement was proposed in [23]. However, it is unreasonable to utilize only the low-frequency subband for illumination enhancement: according to our experiments, the image edges will appear jagged after transformation.

3. Proposed Method: GLAGC

3.1. Algorithm Scheme

Gamma correction [17] is a common method for illumination enhancement and is defined as:

I′ = I_max × (I / I_max)^γ   (1)

where I′ is the corrected image, I_max is the maximum intensity value of the original image, I is the original image, and γ is the parameter. For different values of γ, the resulting image has different enhancement results, as shown in Figure 1. When γ < 1, low-intensity pixels will be increased more than high-intensity pixels. When γ > 1, the opposite effect is generated. When γ = 1, the input and output intensities are equal.
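As a concrete illustration, Eq. (1) can be sketched in a few lines of NumPy; the sample array and γ values below are arbitrary choices, not values from the paper:

```python
import numpy as np

def gamma_correct(img, gamma):
    """Eq. (1): I' = I_max * (I / I_max) ** gamma."""
    i_max = img.max()
    return i_max * (img / i_max) ** gamma

img = np.array([[10.0, 128.0],
                [200.0, 255.0]])

# gamma < 1 lifts low-intensity pixels more than high-intensity ones;
# gamma = 1 leaves the image unchanged.
brightened = gamma_correct(img, 0.5)
unchanged = gamma_correct(img, 1.0)
```

Note that the mapping fixes both endpoints: a zero pixel stays zero and the maximum stays at I_max, so only the mid-range is redistributed.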

Figure 1. An example of gamma correction: enhanced images with different parameters γ. (a) Original image. (b) γ = 0.1. (c) γ = 0.3. (d) γ = 0.8. (e) γ = 1.2. (f) γ = 1.5. (g) The mapping curves for different parameters γ.

The limitations of the conventional gamma correction method are obvious: (1) The selection of the parameters requires experience. (2) Spatial information such as uneven lighting of the image is not considered. (3) The overall illumination cannot be perceived, and overexposure sometimes occurs.

For this reason, a novel adaptive gamma correction method, called global statistics and local spatial adaptive dual-gamma correction (GLAGC), is proposed in this section. First, the V component of the HSV model of the input image is converted to the logarithmic domain. Through the DWT, the illumination information of the image is obtained from the low-frequency subband LL. The dual-gamma correction γ(θ[χ,σ]) based on spatial and statistical information is applied to subband LL:

LL′ = I_MAX × (LL / I_MAX)^γ(θ[χ,σ])   (2)

where I_MAX is the maximum pixel value of the LL subband and LL′ is the corrected low-frequency subband. For naturalness preservation, adaptive punishment adjustment is applied in the LH, HL, and HH subbands. Then, the corrected V component is obtained through the inverse wavelet transform. Finally, the enhanced image is reconstructed by converting it to the RGB color space through color restoration. The process flow of the proposed image enhancement method is shown in Figure 2.

Figure 2. Flowchart of GLAGC method.
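The flow of Figure 2 can be outlined as a skeleton with each stage passed in as a callable. The stage names and the log1p/expm1 mapping used here are our illustrative assumptions; the actual corrections are defined in the following subsections:

```python
import numpy as np

def glagc_pipeline(v, dwt2, idwt2, lsagc, gsagc, adjust_high):
    """Sketch of the Figure 2 flow. `v` is the V channel of the HSV image.

    dwt2/idwt2: wavelet analysis/synthesis; lsagc/gsagc: the dual-gamma
    corrections on LL; adjust_high: punishment adjustment on LH, HL, HH.
    """
    s = np.log1p(v)                           # to the logarithmic domain
    ll, highs = dwt2(s)                       # LL and (LH, HL, HH)
    ll = gsagc(lsagc(ll))                     # adaptive dual-gamma on LL
    highs = tuple(adjust_high(b) for b in highs)
    return np.expm1(idwt2(ll, highs))         # back to the intensity domain
```

With identity stages and a trivial transform, the pipeline reduces to the identity map, which is a convenient sanity check of the domain conversions.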

3.2. Luminance Extraction in the Wavelet Domain

According to Retinex theory, an image can be expressed as the multiplicative combination of the reflection intensity and the illumination brightness, namely:

S(x, y) = L(x, y) × R(x, y)   (3)

where S(x, y) is the pixel information of the image and R(x, y) is the reflection intensity, reflecting surface properties of the object such as color and texture, which correspond to the high-frequency information of the image; L(x, y) is the environmental illumination, which depends on the external lighting conditions and corresponds to the low-frequency information of the image. Since operation in the logarithmic domain is closer to the visual characteristics perceived by the human eye, the image is converted to the logarithmic domain to obtain the additive combination of reflection intensity and illumination brightness:

s(x, y) = l(x, y) + r(x, y)   (4)

where s(x, y) = log(S(x, y)), r(x, y) = log(R(x, y)), and l(x, y) = log(L(x, y)). To obtain the illumination component l(x, y), a center/surround Retinex method such as the SSR algorithm uses the convolution of a Gaussian function with the image s(x, y):

l(x, y) = s(x, y) ∗ G(x, y)   (5)

G(x, y) = k × e^(−(x² + y²)/c²)   (6)

where ∗ is a convolution operation, G(x, y) is the Gaussian convolution function with ∬G(x, y) = 1, c is the scale factor, and k is the normalization constant. The MSR algorithm uses multiscale Gaussian functions:

l(x, y) = ∑_{n=1}^{N} v_n × {s(x, y) ∗ G_n(x, y)}   (7)

where G_n(x, y) is the Gaussian function of the n-th scale and the weights v_n satisfy ∑_{n=1}^{N} v_n = 1.

State-of-the-art methods such as SSR and MSR obtain the illumination feature by Gaussian convolution within a certain perception domain. Gaussian convolution is computationally expensive. Moreover, the neighboring pixel information also includes the edges of the image, texture, and other redundant details that do not contribute to the illuminance features. This paper takes a different approach: illumination extraction is conducted in the low-frequency subband of the wavelet domain, while the details of the image are extracted in the high-frequency subbands. The DWT [24] of a digital image f(x, y) can be expressed as:

W_φ(j0, m, n) = (1/(M × N)) ∑_{x=0}^{M−1} ∑_{y=0}^{N−1} f(x, y) φ_{j0,m,n}(x, y)   (8)

W^i_ψ(j0, m, n) = (1/(M × N)) ∑_{x=0}^{M−1} ∑_{y=0}^{N−1} f(x, y) ψ^i_{j0,m,n}(x, y), i ∈ {H, V, D}   (9)

where φ is the scale function; ψ is the wavelet function; (M, N) is the size of the image; j0 is the initial scale; W_φ(j0, m, n) is the low-frequency wavelet coefficient, which is an approximation of f(x, y); index i identifies the directional wavelets through the values H, V, and D; and W^i_ψ(j, m, n) is the high-frequency wavelet coefficient. For scales j ≥ j0, these coefficients represent the horizontal, vertical, and diagonal details in the three directions. The DWT uses low-pass and high-pass filters to decompose the pixel information of the image into 4 subbands, namely, LL, LH, HL, and HH. LL denotes the low-pass subband, and LH, HL, and HH denote the vertical, horizontal, and diagonal subbands, respectively, where:

LL = W_φ(j0, m, n)   (10)

LH, HL, HH = W^i_ψ(j, m, n), i ∈ {H, V, D}   (11)

From the perspective of the frequency domain, the high-frequency subbands after applying the wavelet transform contain only detail information, such as the edges of image objects, which ensures that the illumination component of the image is included in the low-frequency subband LL. Therefore, the illumination of the image can be corrected by using only the low-frequency subband. After illuminance correction in the low-frequency subband, we can use the inverse wavelet transform to obtain the reconstructed image:

O(x, y) = iDWT{W′_φ(j0, m, n), W′^i_ψ(j, m, n)}   (12)

where W′_φ(j0, m, n) and W′^i_ψ(j, m, n) are the corrected coefficients, iDWT{·} represents the inverse wavelet transform, and O(x, y) denotes the corrected image. Next, the proposed adaptive dual-gamma correction method for the low-frequency subband LL, based on the extracted illumination features, is described.
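For concreteness, a single-level 2-D Haar transform, one possible wavelet choice (the paper does not commit to a particular filter here), produces the four subbands of Eqs. (10) and (11) and inverts exactly:

```python
import numpy as np

def haar_dwt2(f):
    """Single-level 2-D Haar DWT: returns LL and the (LH, HL, HH) details."""
    a = (f[0::2, 0::2] + f[0::2, 1::2] + f[1::2, 0::2] + f[1::2, 1::2]) / 4  # LL
    h = (f[0::2, 0::2] + f[0::2, 1::2] - f[1::2, 0::2] - f[1::2, 1::2]) / 4  # LH
    v = (f[0::2, 0::2] - f[0::2, 1::2] + f[1::2, 0::2] - f[1::2, 1::2]) / 4  # HL
    d = (f[0::2, 0::2] - f[0::2, 1::2] - f[1::2, 0::2] + f[1::2, 1::2]) / 4  # HH
    return a, (h, v, d)

def haar_idwt2(a, details):
    """Inverse of haar_dwt2 (Eq. (12) for this filter choice)."""
    h, v, d = details
    f = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    f[0::2, 0::2] = a + h + v + d
    f[0::2, 1::2] = a + h - v - d
    f[1::2, 0::2] = a - h + v - d
    f[1::2, 1::2] = a - h - v + d
    return f
```

A flat image yields zero detail subbands, which matches the claim above that illumination lands in LL while the high-frequency subbands carry only edges and texture.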

3.3. Local Spatial Adaptive Gamma Correction (LSAGC)

A spatial luminance distribution feature (SLDF) is proposed, which is defined as:

SLDF(x, y) = ∑_{n=1}^{N} v_n × {LL ∗ G_n(x, y)}   (13)

where SLDF(x, y) obtains the pixel neighborhood information by applying a convolution operation to estimate the local spatial distribution of the image's illumination. Figure 3 illustrates the SLDF(x, y) of an image, its frequency-domain analysis diagram, and a time-consumption analysis of our method and MSR. In Figure 3b, the Y-axis denotes the average Fourier log intensity [25] of the image, and the X-axis denotes the frequency. In Figure 3c, the Y-axis denotes the average time consumption of illumination extraction, and the X-axis denotes the image size.

From Figure 3b,c, we found the following:
(1) The frequency components of the illumination extracted by the MSR algorithm are included in the frequency components of the LL subband, which means that the illumination of the image can be extracted in the LL subband alone.
(2) As the frequency increases, the amplitude of SLDF(x, y) attenuates faster. This property is helpful in preserving the image details from the perspective of the local illumination characteristics.
(3) For common image sizes, the proposed SLDF illumination extraction time is much less than that of the MSR algorithm, and the benefit of the SLDF scheme compared with the MSR algorithm increases as the image size increases.

Figure 3. (a) Illuminance feature SLDF(x, y) extraction. (b) Frequency-domain analysis. (c) Time consumption of SLDF(x, y) and MSR.

The uneven spatial distribution of the image illuminance appears as overexposure or underexposure in certain areas. The proposed local spatial adaptive gamma correction (LSAGC) method is applied to LL and is defined as:

γ(Θχ) = (M_SLDF / I_MAX)^σ   (14)

σ = 2 × [M_SLDF − SLDF(x, y)] / I_MAX   (15)

where M_SLDF is the average of SLDF(x, y) and σ measures the difference between the brightness of a given pixel and the average intensity. When the spatial brightness of the image is evenly distributed, (M_SLDF / I_MAX) is close to 1, and the correction ability of γ(Θχ) becomes weak. When SLDF(x, y) is greater than M_SLDF, strong illumination appears, which makes σ < 0; thus, the illumination will be reduced by (14). In contrast, the brightness of dark areas will be increased, so uneven lighting is improved through adaptive correction. Applying γ(Θχ) to LL:

LL_LS = I_MAX × (LL / I_MAX)^γ(Θχ)   (16)

where LL_LS indicates the low-frequency subband corrected by LSAGC.

Figure 4 illustrates the LSAGC results. An image with uneven illumination is shown in Figure 4a. The LL subband obtained by the wavelet transform is shown in Figure 4b, and SLDF(x, y) is shown in Figure 4c. The reconstructed image obtained by iDWT{LL_LS, W^i_ψ(j, m, n)} is shown in Figure 4d.
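A runnable sketch of Eqs. (13)-(16) follows. Box filters stand in for the Gaussians G_n of Eq. (13), and the radii and weights are illustrative choices rather than values taken from the paper:

```python
import numpy as np

def box_blur(img, r):
    """Mean filter of radius r (a stand-in for the Gaussian G_n in Eq. (13))."""
    pad = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += pad[r + dy:r + dy + img.shape[0], r + dx:r + dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

def lsagc(ll, radii=(1, 2, 4), weights=(1 / 3, 1 / 3, 1 / 3)):
    """Eqs. (13)-(16) on the LL subband; radii/weights are illustrative."""
    i_max = ll.max()
    sldf = sum(w * box_blur(ll, r) for w, r in zip(weights, radii))  # Eq. (13)
    m_sldf = sldf.mean()
    sigma = 2 * (m_sldf - sldf) / i_max                              # Eq. (15)
    gamma = (m_sldf / i_max) ** sigma                                # Eq. (14)
    return i_max * (ll / i_max) ** gamma                             # Eq. (16)
```

Because γ(Θχ) is computed per pixel, locally bright regions (σ < 0, γ > 1) are attenuated while locally dark regions (σ > 0, γ < 1) are lifted, as the text above describes.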



Figure 4. Result of the LSAGC method. (a) Original image. (b) LL. (c) SLDF(x, y) of LL. (d) Reconstructed image after applying LSAGC.

It can be seen from Figure 4d that although the uneven spatial illumination distribution of the image has been corrected, the overall brightness is still low, resulting in unclear details, such as the human face and horse body. Further overall luminance correction is required.

3.4. Global Statistics Adaptive Gamma Correction (GSAGC)

3.4.1. Global Statistical Luminance Feature (GSLF)

The information entropy of the image represents the aggregation feature of the grayscale value distribution, which is defined as:

Entropy = −∑_{i=0}^{255} p_i × log2 p_i   (17)

where p_i is the probability of a certain grayscale value. Figure 5a–f show images of different luminance conditions with their grayscale distribution histograms. When the image is properly exposed, the grayscale distribution histogram is uniform, and its information entropy is the largest, as shown in Figure 5g.
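Eq. (17) is the standard Shannon entropy of the grayscale histogram; a minimal NumPy version, assuming 8-bit grayscale values:

```python
import numpy as np

def image_entropy(img):
    """Eq. (17): Shannon entropy of the 8-bit grayscale histogram, in bits."""
    hist = np.bincount(img.ravel().astype(np.uint8), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                      # terms with p_i = 0 contribute nothing
    return -np.sum(p * np.log2(p))
```

A constant image scores 0 bits, and a perfectly uniform 256-level histogram scores the maximum of 8 bits, matching the maximum-entropy reasoning used below.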

Figure 5. Comparison of the entropy among different images. (a–f) Images with different luminance values. (g) Entropy comparison.

The probability density function (pdf) and cumulative distribution function (cdf) of the image are defined as follows:

pdf(i) = n_i / N   (18)

cdf(i) = ∑_{k=0}^{i} pdf(k)   (19)

where i is the pixel intensity, ni is the number of pixels with intensity i, and N is the total number of pixels in the image. According to the maximum discrete entropy theorem, the image with the largest entropy has a uniformly distributed grayscale histogram, and its cdf(i) has linear characteristics, namely:

∀i, pdf(i) = c, cdf(i) = c × i   (20)

Here, (20) is converted to the logarithmic domain:

cdf(l) = exp(l)   (21)


where l = log(c × i) and c is a constant. In our research, the cdf (l) of subband LL of the image with the largest entropy in the logarithmic domain through wavelet decomposition is constructed as an intensity-guided distribution (IGD) function. It plays a guiding role in image illumination correction. The IGD function is defined as:

IGD(l) = exp(l) / exp(I_max), l ∈ (0, I_max)   (22)

The pdf (l) of the subband LL is normalized as:

pdf_norm(l) = (pdf(l) − pdf_min) / (pdf_max − pdf_min)   (23)

where pdf_max and pdf_min are the maximum and minimum values of the image pdf, respectively. According to the difference between cdf(l) of the input image and IGD(l) of the ideal image with the largest entropy, pdf_GW(l) and cdf_GW(l) are designed as follows:

pdf_GW(l) = pdf_norm(l)^{1−{1−[cdf(l)−IGD(l)]}}   (24)

cdf_GW(l) = ∑_{k=0}^{l} pdf_GW(k)   (25)

Figure 6 demonstrates three different images of the same scene, which appear underexposed in Figure 6a, properly exposed in Figure 6d, and overexposed in Figure 6g. A comparison of pdf_norm(l) and pdf_GW(l) is shown in Figure 6b,e,h, respectively. The relationship among cdf(l), cdf_GW(l), and IGD(l) is shown in Figure 6c,f,i, respectively. The luminance distribution can be estimated according to the difference between cdf_GW(l) and cdf(l). For an underexposed image, the area enclosed by cdf(l) and the X-axis is far larger than the area enclosed by cdf_GW(l) and the X-axis. For a properly exposed image, the area enclosed by cdf(l) and the X-axis is close to the area enclosed by cdf_GW(l) and the X-axis. For an overexposed image, the area enclosed by cdf(l) and the X-axis is close to the area enclosed by cdf_GW(l) and the X-axis but smaller than that of IGD(l).

For the correction of the overall illumination brightness of an image, a global statistical luminance feature (GSLF) is designed to evaluate the difference between cdf(l) and cdf_GW(l), which is defined as:

GSLF = ∑[cdf_GW(l) − cdf(l)] / ∑ cdf(l)   (26)

In our research, a global statistics adaptive gamma correction (GSAGC) method is proposed as γ(Θσ), which is applied to subband LL:

LL_GS = I_MAX × (LL / I_MAX)^γ(Θσ)   (27)

where LL_GS indicates the low-frequency subband corrected by GSAGC. Through a training set containing images with various illuminance conditions, the relationship between γ(Θσ) and the GSLF will be estimated.
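The chain of Eqs. (18)-(27) can be sketched as follows. The bin count, the log1p mapping, and the clipping guard are our assumptions, and a γ value is passed in directly because the paper learns the γ(Θσ)-GSLF relationship from a training set rather than fixing a formula:

```python
import numpy as np

def gslf(ll, bins=256):
    """Eqs. (18)-(26): global statistical luminance feature of subband LL."""
    l = np.log1p(np.asarray(ll, dtype=float))           # log-domain intensities
    l_max = l.max()
    hist, edges = np.histogram(l, bins=bins, range=(0.0, l_max))
    pdf = hist / hist.sum()                             # Eq. (18)
    cdf = np.cumsum(pdf)                                # Eq. (19)
    centers = 0.5 * (edges[:-1] + edges[1:])
    igd = np.exp(centers) / np.exp(l_max)               # Eq. (22)
    pdf_norm = (pdf - pdf.min()) / (pdf.max() - pdf.min())  # Eq. (23)
    pdf_norm = np.clip(pdf_norm, 1e-6, 1.0)             # guard: avoids 0 ** negative
    pdf_gw = pdf_norm ** (1 - (1 - (cdf - igd)))        # Eq. (24)
    cdf_gw = np.cumsum(pdf_gw)                          # Eq. (25)
    return float(np.sum(cdf_gw - cdf) / np.sum(cdf))    # Eq. (26)

def gsagc(ll, gamma):
    """Eq. (27): global gamma correction of LL with gamma = γ(Θσ)."""
    i_max = ll.max()
    return i_max * (ll / i_max) ** gamma
```

In use, the scalar returned by `gslf` would be mapped to γ(Θσ) by the trained relationship and then fed to `gsagc`; any γ < 1 raises the overall brightness of LL while keeping its maximum fixed.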


γΘ =×LL ()σ LLGS I MAX () (27) IMAX Figure 6. (a,d,g) Original image. (b,e,h) The comparison of pdfnorm(l) and pdfGW (l). (c,f,i) The whereFigure LL GS6. indicates (a,d,g) Originalthe corrected image. low-frequency (b,e,h) The subband comparison by GSAGC. of Throughpdfnorm(l) a and training pdfGW (l). (c,f,i) The com- comparison of cdf (l), cdfGW (l) and IGD(l). setparison containing of cdf images(l), cdf withGW various(l) and illuminance IGD(l). conditions, the relationship between γ(Θσ) and theTraining GSLF Datasets:will be estimated. This article has established an image dataset collected from related worksTraining [2For,15,18 the Datasets:,20 –correction22,26 ]This containing article of hasthe different established overall luminance illumination an image conditions, dataset brightness includingcollected from underexpo- of related an image, a global statis- sure,works proper [2,15,18,20–22,26] exposure, and containing uneven exposure.different luminance conditions, including underexpo- sure,ticalLoss proper luminance Function: exposure, To featurejudge and uneven whether (GSLF exposure. 
the overall) is designed illumination to intensity evaluate of an the image difference satisfies between cdf(l) and thecdf maximumGWLoss(l ),Function: which entropy To is judgedefined criterion, whether weas: introduce the overall the illumination information intensity entropy of loss an image function satis- to obtainfies the the maximum global statistics entropy adaptive criterion, gamma we intrγoduce(Θσ), namely:the information entropy loss function to obtain the global statistics adaptive gamma γ(Θσ), namely: − cdfGW () l cdf () l GSLFN =  (26) ( ) = − N ( ) LossEntropy D =−∑ Entropy Dα cdf() l (28) LossEntropy () Dα=1 Entropy() Dα (28) α =1 i Dα = iDWT= {LLGS, Wψ(i j, m, n)} (29) DiDWTLLWjmnαψ{,(,,)}GS (29) where D is the training dataset, Dα is a reconstruction sample, and N is the number of where D is the training dataset, Dα is a reconstruction sample, and N is the number of samples in the training dataset. When the information entropy loss function of the recon- samples in the training dataset. When the information entropy loss function of the recon- structed images is minimized, the regression curve indicating the relationship between structed1× 1images×N is minimized,1×1×N the regression curve indicating the relationship between γ(Θσ) and GSLF is obtained in Figure7: γ(Θσ)1×1×N and GSLF1×1×N is obtained in Figure 7.: 2 γ(Θσγ) = 8.224 × ×−×+GSLF 2− 5.534 × GSLF + 1.093 (30) (Θσ )=8.224 GSLF5.534 GSLF 1.093 (30)

Figure 7. γ GSLF Figure 7. Regression curve of γ(Θσ)) and GSLF.. Confidence Confidence interval is 95%.
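As a concrete reference, the chain from (22) to (30) can be sketched in NumPy. The function names, the 256-bin histogram, and the masking of empty bins (to avoid raising zero to a negative power) are our assumptions, not the authors' implementation:

```python
import numpy as np

def gsagc_gamma(LL, i_max=1.0, bins=256):
    """Sketch of eqs. (22)-(26) and the trained regression (30).

    `LL` is the low-frequency subband in the log domain, scaled to (0, i_max].
    """
    hist, edges = np.histogram(LL, bins=bins, range=(0.0, i_max))
    pdf = hist / hist.sum()                      # pdf(l) of subband LL
    cdf = np.cumsum(pdf)

    l = 0.5 * (edges[:-1] + edges[1:])           # bin centres stand in for l
    igd = np.exp(l) / np.exp(i_max)              # eq. (22)

    # Eq. (23): min-max normalization of the pdf.
    pdf_norm = (pdf - pdf.min()) / (pdf.max() - pdf.min() + 1e-12)

    # Eq. (24): guided weighting; empty bins stay zero for numerical safety.
    expo = 1.0 - (1.0 - (cdf - igd))
    pdf_gw = np.zeros_like(pdf_norm)
    mask = pdf_norm > 0
    pdf_gw[mask] = pdf_norm[mask] ** expo[mask]

    # Eq. (25): cumulative sum, renormalized so cdf_gw ends at 1.
    cdf_gw = np.cumsum(pdf_gw) / (pdf_gw.sum() + 1e-12)

    gslf = np.sum(cdf_gw - cdf) / (np.sum(cdf) + 1e-12)   # eq. (26)
    return 8.224 * gslf ** 2 - 5.534 * gslf + 1.093        # eq. (30)

def gsagc_correct(LL, gamma, i_max=1.0):
    return i_max * (LL / i_max) ** gamma          # eq. (27)
```

The fitted polynomial maps the feature to a single correction exponent; a value near 1 leaves the LL subband almost unchanged.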

According to the above, the proposed adaptive dual-gamma correction function, GLAGC, which takes into account the γ(Θχ) by LSAGC and the γ(Θσ) by GSAGC, is given as:

γ(Θ[χ,σ]) = γ(Θχ) × γ(Θσ)    (31)

3.5. Smoothness Preservation
Since GLAGC is adopted in the low-frequency subband LL in the wavelet domain, the high-frequency subband needs to be adjusted correspondingly. Otherwise, jaggedness will appear at the image edges after inverse wavelet transformation, as shown in Figure 8. Thus, we introduce a smoothness adjustment to the wavelet high-frequency subband, denoted by:

W^i_ψ = L(Θγ) × W^i_ψ    (32)

where W^i_ψ is the high-frequency wavelet coefficient and L(Θγ) is the adjustment coefficient. Considering that the high-frequency subbands in the three directions have the same importance, the same punishment coefficient is used.

Figure 8. (a) Original image. (b) Reconstructed image by (29).

According to the discrete wavelet inverse transform, the image reconstructed by the scale coefficients is defined as s1(x, y), the image reconstructed by the wavelet coefficients is denoted by s2(x, y), and the final reconstructed image ς(x, y) is defined as:

ς(x, y) = s1(x, y) + s2(x, y)    (33)

s1(x, y) = (1/√(MN)) ∑_M ∑_N W_φ(j0, m, n) φ_{j0,m,n}(x, y)    (34)

s2(x, y) = (1/√(MN)) ∑_{i=H,V,D} ∑_{j=j0}^{+∞} ∑_M ∑_N W^i_ψ(j, m, n) ψ^i_{j,m,n}(x, y)    (35)

Figure 9 shows the relationship between the images reconstructed by the scale coefficients and the wavelet coefficients. According to the correlation between adjacent pixels in the image, when the 3 neighboring pixels are on a straight line, the edge of the object can be considered smooth and not jagged; we define this as the edge smoothness preservation constraint, namely:

2 × ς(x + 1, y) = ς(x, y) + ς(x + 2, y)    (36)

Substituting (33) into (36) yields:

2 × [s1(x + 1, y) + s2(x + 1, y)] = [s1(x, y) + s2(x, y)] + [s1(x + 2, y) + s2(x + 2, y)]    (37)

The low-frequency coefficient after adaptive gamma correction is defined as W′_φ(j0, m, n); the corresponding reconstructed image s′1(x, y) is defined according to (34):

s′1(x, y) = (1/√(MN)) ∑_M ∑_N W′_φ(j0, m, n) φ_{j0,m,n}(x, y) = (1/√(MN)) ∑_M ∑_N I_MAX (W_φ(j0, m, n)/I_MAX)^γ φ_{j0,m,n}(x, y) = (I_MAX^{1−γ}/√(MN)) ∑_M ∑_N (W_φ(j0, m, n))^γ φ_{j0,m,n}(x, y)    (38)

The gradient comparison of any pixel (xi, yi) between s1(x, y) and s′1(x, y) is:

Δs′1(xi, yi)/Δs1(xi, yi) = I_MAX^{1−γ} × [(W_φ(j0, mi, ni) + Δ)^γ − W_φ(j0, mi, ni)^γ] / [(W_φ(j0, mi, ni) + Δ) − W_φ(j0, mi, ni)] ≈ I_MAX^{1−γ} × γ × W_φ(j0, mi, ni)^{γ−1}    (39)

The high-frequency coefficients are adjusted by L(Θγ) to obtain the reconstructed image s′2(x, y):

s′2(x, y) = (L(Θγ)/√(MN)) ∑_{i=H,V,D} ∑_{j=j0}^{+∞} ∑_M ∑_N W^i_ψ(j, m, n) ψ^i_{j,m,n}(x, y) = L(Θγ) × s2(x, y)    (40)

By substituting (39) and (40) into the edge smoothness preservation constraint (37), the punishment coefficient can be obtained:

L(Θγ) = I_MAX^{1−γ} × γ × W_φ(j0, m0, n0)^{γ−1}    (41)

L(Θγ) can adjust the high-frequency coefficient adaptively with γ(Θ[χ,σ]) to maintain the smoothness of the image edges.
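The punishment coefficient (41) and its application in (32) can be sketched as follows. Treating the coefficient elementwise (one factor per location of the approximation subband) rather than at a single position (j0, m0, n0) is our simplification, and the guard against zero coefficients is an added safeguard:

```python
import numpy as np

def punishment_coefficient(w_phi, gamma, i_max=1.0):
    # Eq. (41): L(Θγ) = I_MAX^(1−γ) · γ · W_φ^(γ−1), guarded against W_φ = 0.
    return i_max ** (1.0 - gamma) * gamma * np.maximum(w_phi, 1e-6) ** (gamma - 1.0)

def adjust_details(details, l_gamma):
    # Eq. (32): the same coefficient scales the H, V and D subbands alike.
    return tuple(l_gamma * w for w in details)
```

With γ = 1 the coefficient reduces to 1 and the detail subbands pass through unchanged, which is a quick sanity check on (41).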

Figure 9. Relationship between adjacent pixels of ς(x, y), s1(x, y), s2(x, y).

3.6. Color Restoration
The HSV color model is used in our research because it is consistent with the human eye's perception of color. It includes three characteristics: hue (H), saturation (S) and value (V). The V component represents the luminance intensity. GLAGC is performed on the V component. To restore the color information of the observed image, the output color image in RGB color space can be obtained by a linear transform [21], and the following improved operations are defined:

R′(x, y) = [V′(x, y)/(V(x, y) + ζ(x, y))] × R(x, y)
G′(x, y) = [V′(x, y)/(V(x, y) + ζ(x, y))] × G(x, y)    (42)
B′(x, y) = [V′(x, y)/(V(x, y) + ζ(x, y))] × B(x, y)

where V(x, y), R(x, y), G(x, y), and B(x, y) are the V, R, G, and B components before correction. V′(x, y), R′(x, y), G′(x, y), and B′(x, y) are the corresponding components after correction. ζ(x, y) is an adaptive stability factor that plays a role in low-illumination noise suppression, which is defined as:

ζ(x, y) = β × V′(x, y)/V(x, y)    (43)

where β is the adjustment coefficient. In general, β = 0.005. To sum up, we describe the algorithm of the proposed GLAGC method in Algorithm 1.

Algorithm 1 Algorithm for the adaptive dual-gamma function for image illumination perception and correction in the wavelet domain (GLAGC)
Algorithm's inputs: Original image S(x, y)
Algorithm's output: Enhanced image O(x, y)
Step (1): Convert to HSV space to obtain the V component
Step (2): Convert the image to the logarithmic domain v = log(V + 1)
Step (3): Fast illuminance extraction in the LL subband by the wavelet transform
Step (4): Illuminance feature extraction:
  Spatial luminance distribution feature (SLDF)
  Global statistical luminance feature (GSLF)
Step (5): Adaptive dual-gamma correction γ(Θ[χ,σ]) for the LL subband:
  γ(Θχ) (obtained by the SLDF)
  γ(Θσ) (obtained by the GSLF and gamma training)
Step (6): Smoothness preservation L(Θγ) for the high-frequency coefficients
Step (7): Inverse wavelet and inverse logarithmic transform
Step (8): Color restoration
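A minimal, dependency-light sketch of Steps (1)–(7) is given below. It substitutes a single-level Haar transform for the paper's DWT and one fixed exponent for the trained dual-gamma γ(Θ[χ,σ]), so it illustrates the data flow only, not the adaptive behaviour; all names are ours:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform (stand-in for the paper's DWT)."""
    p, q = x[0::2, 0::2], x[0::2, 1::2]
    r, s = x[1::2, 0::2], x[1::2, 1::2]
    return (p + q + r + s) / 2, ((p - q + r - s) / 2,   # LL, (LH,
                                 (p + q - r - s) / 2,   #      HL,
                                 (p - q - r + s) / 2)   #      HH)

def haar_idwt2(ll, details):
    h, v, d = details
    x = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    x[0::2, 0::2] = (ll + h + v + d) / 2
    x[0::2, 1::2] = (ll - h + v - d) / 2
    x[1::2, 0::2] = (ll + h - v - d) / 2
    x[1::2, 1::2] = (ll - h - v + d) / 2
    return x

def glagc_sketch(v_channel, gamma=0.6):
    v_log = np.log(v_channel + 1.0)                     # Step (2): log domain
    ll, details = haar_dwt2(v_log)                      # Step (3): DWT
    i_max = max(ll.max(), 1e-6)
    ll_corr = i_max * (ll / i_max) ** gamma             # Step (5), as in (27)
    l_gamma = i_max ** (1 - gamma) * gamma * np.maximum(ll, 1e-6) ** (gamma - 1)
    out_log = haar_idwt2(ll_corr, tuple(l_gamma * w for w in details))  # (6)-(7)
    return np.clip(np.exp(out_log) - 1.0, 0.0, 1.0)
```

The Haar pair is perfectly invertible, so with γ = 1 the sketch returns the input; the real method replaces the fixed `gamma` with the LSAGC/GSAGC estimates.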

4. Experiments
During the experiments, first, the performances of LSAGC and GSAGC are verified. Next, image naturalness preservation through punishment adjustment and low-illumination noise suppression is illustrated. Then, the GLAGC method is qualitatively compared with several state-of-the-art methods. All the experiments are run in MATLAB R2017b for Windows 7 on a computer equipped with an Intel(R) Core(TM) i7-4790 CPU at 3.60 GHz and 8 GB memory. All the test images are sourced from related work [2,15,18,20–22,26] and benchmarks that have been commonly used for performance verification.
Four state-of-the-art algorithms were used for the comparison experiments, including the variational-based method SIRE [19], the AGCWD method combined with histograms [17], the 2-D adaptive gamma correction method (Sungmok Lee's method) [18], and LIME based on Retinex theory [2]. All the parameters in the competing methods are chosen according to their original articles.
Four evaluation indicators were selected in the experiments:
(1) The computational cost of the algorithm;
(2) The information entropy, which is used to quantify and evaluate the information richness of the enhanced image;

(3) The absolute mean brightness error (AMBE) [27], which is used to evaluate illuminance retention, is defined as follows:

AMBE(x, y) = |xm − ym| (44)

where xm and ym represent the average values of the input image and output image, respectively.
(4) The lightness order error (LOE), which is used to evaluate the naturalness of image enhancement [26]:

LOE = (1/(mn)) ∑_{i=1}^{m} ∑_{j=1}^{n} RD(i, j)

RD(x, y) = ∑_{i=1}^{m} ∑_{j=1}^{n} U(L(x, y), L(i, j)) ⊕ U(Le(x, y), Le(i, j))    (45)

U(x, y) = 1 if x > y; 0 otherwise

where m, n is the image size, RD(i, j) is the relative order of pixel (i, j), ⊕ is the exclusive or (XOR) operator, and L(x, y) and Le(x, y) are the original image and enhanced image, respectively. The smaller the LOE value is, the better the naturalness of the original image can be maintained.

4.1. LSAGC Tests
This section will discuss the spatial distribution characteristics of different images and the influence of the proposed LSAGC function on the image spatial illumination distribution.
Figure 10 shows two images with uneven illumination distributions. The area where the lawn is located at the bottom of image (a) is in a weakly exposed state, and images (b) and (c) are the conditions without or with the LSAGC function, respectively.
At the bottom of image (c), with LSAGC, the lawn becomes more obvious, and more detailed textures are also highlighted. Figure 10d shows the normal exposure of the sky in the middle of the image, while the indoor area next to it is weakly exposed. Without LSAGC, the sky area in the middle of the image becomes saturated after enhancement, resulting in the loss of texture and other information. When LSAGC is used, the texture of the sky is not overexposed, and no information is lost.

Figure 10. Effect of LSAGC. (a,d) Original images. (b,e) Enhanced images without LSAGC. (c,f) Enhanced images with LSAGC.

It can be seen from the above two examples that LSAGC sufficiently considers the spatial characteristics of the illumination distribution and redistributes the uneven spatial illumination to make it more uniform. Histogram analysis is given in Figure 11. It can be seen that in the absence of LSAGC, an excess of high pixel values leads to overexposure; LSAGC avoids this situation, and low-value pixels are also enhanced, improving the image illumination quality.

Figure 11. Histogram analysis: (a) images from Figure 10a–c; (b) images in Figure 10d–f.
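For reference, the AMBE and LOE indicators from (44) and (45) can be sketched as below; the function names and the `stride` parameter (our addition to tame the O((mn)²) pairwise comparison) are assumptions:

```python
import numpy as np

def ambe(x, y):
    # Eq. (44): absolute difference of the mean brightness.
    return abs(x.mean() - y.mean())

def loe(l_orig, l_enh, stride=1):
    """Eq. (45): mean number of relative-order inversions per pixel."""
    a = l_orig[::stride, ::stride].ravel()
    b = l_enh[::stride, ::stride].ravel()
    # U(x, y) ⊕ U(x', y'): True where the brightness order flips.
    rd = (a[:, None] > a[None, :]) ^ (b[:, None] > b[None, :])
    return rd.sum(axis=1).mean()
```

Any strictly increasing mapping (such as plain gamma correction) leaves the relative order intact and scores LOE = 0, while inverting the image maximizes it.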

4.2. GSAGC Tests
This experiment discusses the adaptive correction effect of the GSAGC method in the proposed algorithm on images with different global illumination values. Figures 12–14 show three sets of images, in which each experimental input sample is five images with different exposures from dark to bright, and we process them with the proposed algorithm, the parameters of which have been trained. The experimental results show that for images with different exposures, the proposed algorithm can automatically perceive the exposure level and generate high-quality images with almost the same exposure. Figure 15 uses the GSLF as the Y-axis of the above three sets of experimental input images and the mean of the image as the X-axis. The size of the circle indicates the AMBE value.


Figure 12. Park. (a) Input different global illumination image sequences. (b) Output almost the same global illumination image sequence.

Figure 13. Office. (a) Input different global illumination image sequences. (b) Output almost the same global illumination image sequence.

Figure 14. Flash toy. (a) Input different global illumination image sequences. (b) Output almost the same global illumination image sequence.

Figure 15. Relation of AMBE, GSLF and mean of gray value for the Park, the office, and the flash toy.

The GSLF defined in (26) is a measure of the global statistical illumination characteristics of an image. The larger the value is, the weaker the exposure. As shown in Figure 15, as the brightness of the input image gradually increases, the average value of the image gradually increases. When the GSLF decreases, the exposure level increases, and the AMBE subsequently decreases as well, indicating that the image brightness has been maintained and has not continued to increase when the image is properly exposed. This experiment shows that the proposed algorithm can enhance low-exposure images while maintaining normal-exposure images.

4.3. Naturalness Preservation
This section will explain the impact of the adjustment on the high-frequency coefficients and the impact of the adaptive stabilization factor on the image quality.
Figure 16 shows the edge smoothness preservation test. When the high-frequency coefficients are not corrected, the edge smoothness of the image object will be destroyed. As shown in Figure 16b, when the adaptive adjustment obtained from the edge smoothness preservation constraint is used, the image maintains the edge smoothness after being enhanced, improving the visibility of the images.


Figure 16. Effect of the smoothness preservation. (a) Original image. (b) Jagged image after enhancement. (c) Enhanced image with smoothness preservation.

Figure 17 illustrates the low-illumination noise suppression test. When the input image is extremely weakly exposed, as Figure 17b shows, considerable noise will appear at low-exposure areas after color restoration. By the adaptive stability factor defined in (43), the image quality is enhanced while noise is suppressed, as shown in Figure 17c.
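The restoration step of (42) together with the stability factor (43) can be sketched as below; the function and argument names are ours, and the guard on V is an added safeguard for zero-valued pixels:

```python
import numpy as np

def restore_color(rgb, v_old, v_new, beta=0.005):
    """Eqs. (42)-(43): scale R, G, B by V'/(V + ζ) with ζ = β·V'/V."""
    zeta = beta * v_new / np.maximum(v_old, 1e-6)   # eq. (43)
    gain = v_new / (v_old + zeta)                   # common factor of eq. (42)
    return rgb * gain[..., None]
```

Because gain = V′/(V + βV′/V) ≤ V/β, the factor caps the effective amplification in near-black regions, which is the noise-suppression effect visible in Figure 17c.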


Figure 17. (a) Two original images; (b) restored images without noise suppression; (c) enhanced images with noise suppression.

4.4. Comparative Experiments
We compare images under a series of illuminance scenarios through different algorithms, and the results obtained are shown below. As shown in Figures 18–20, a group of images with uneven illumination distributions are called urban, baby, and street.
Figure 18 shows the experiments under the urban image. The AGCWD and SIRE methods cannot significantly enhance the dark areas surrounding the buildings. Moreover, the AGCWD method causes saturation on the upper part of the image; the LIME method can enhance the overall brightness of the image, but it causes overexposure; Lee's method and the proposed method achieve good performances.
Figure 19 displays the results of the experiments on the baby image. The AGCWD and LIME methods both overenhance the background areas; moreover, the AGCWD method cannot increase the brightness of the baby's clothes. The result from Lee's method is overnormalized, and similar results are obtained by the LIME method and the proposed method.
Figure 20 presents the situation for the street image, for which the best performance is achieved by our method. The AGCWD and SIRE methods cannot enhance the dark areas at the bottom of the image; the overall picture by Lee's method is still dark; the LIME method seems to yield a bright image, but it causes oversaturation in the sky.

Figure 18. Urban image. (a) Original image; (b) result of Lee's method; (c) result of LIME; (d) result of AGCWD; (e) result of SIRE; (f) result of the proposed method.

Figure 19. Baby image. (a) Original image; (b) result of Lee’s method; (c) result of LIME; (d) result of AGCWD; (e) result of SIRE; (f) result of the proposed method.

Figure 20. Street image. (a) Original image; (b) result of Lee’s method; (c) result of LIME; (d) result of AGCWD; (e) result of SIRE; (f) result of the proposed method.

The other image samples (composed of the building, goddess, and landscape images) have evenly distributed spatial illumination but different global illumination, as shown in Figures 21–23.

Figure 21 reveals the algorithms' performance in complex lighting conditions. For the shadow areas in the Buildings image in Figure 21, our method outperforms all the other methods; the LIME and AGCWD methods cannot restore the colors of the dusk area; Lee's method causes ripple distortions in the sky area.

Figure 22 is the comparison for the Goddess image. Lee's method results in excessive contrast enhancement. Overenhancement is produced in the face region by the LIME and AGCWD methods; furthermore, the AGCWD method cannot remove the shadows in the background. The SIRE method achieves the best naturalness preservation, but its computational cost is much greater than that of our method, as discussed later.

Figure 23 shows the landscape image, whose pleasant visual effects are used to test the ability to avoid overexposure. The mountain in the background becomes blue and loses its original color with Lee's method; oversaturation occurs with the LIME method; the AGCWD and SIRE methods and our method all produce good results.






Figure 21. Buildings image. (a) Original image; (b) result of Lee's method; (c) result of LIME; (d) result of AGCWD; (e) result of SIRE; (f) result of the proposed method.

Figure 22. Goddess image. (a) Original image; (b) result of Lee’s method; (c) result of LIME; (d) result of AGCWD; (e) result of SIRE; (f) result of the proposed method.

Figure 23. Landscape image. (a) Original image; (b) result of Lee’s method; (c) result of LIME; (d) result of AGCWD; (e) result of SIRE; (f) result of the proposed method.

Figure 24 provides more experimental results obtained by GLAGC. Table 1 shows the entropy, LOE and AMBE performance of the different algorithms. The proposed algorithm achieves the maximum average information entropy of the enhanced images, which reveals that it retains the most abundant image information. In terms of the preservation of naturalness, the proposed method has the lowest LOE after the AGCWD method, whose results are nevertheless inferior to ours in the other metrics. Regarding the AMBE, the maximum is achieved by our method in dark illumination scenes such as Building, revealing its overall boosting of low illumination, while the minimum is obtained in the normal-exposure scene (Landscape), which demonstrates brightness maintenance. Table 2 shows the average computational costs of the different algorithms under the same computational conditions with an image resolution of 512 × 512. It can be seen that the proposed algorithm achieves good results in a short amount of time.
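The entropy and AMBE metrics follow standard definitions. As an illustration only (not the evaluation code used in the paper), a minimal Python sketch of both metrics on 8-bit grayscale images might look like this:

```python
import numpy as np

def entropy(gray):
    """Shannon entropy (bits) of an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-np.sum(p * np.log2(p)))

def ambe(original, enhanced):
    """Absolute Mean Brightness Error: |mean(input) - mean(output)|."""
    return float(abs(original.astype(np.float64).mean()
                     - enhanced.astype(np.float64).mean()))

# Toy example: a dark horizontal ramp and a gamma-brightened version of it.
x = np.tile(np.arange(64, dtype=np.uint8), (64, 1))    # values 0..63
y = (255.0 * (x / 255.0) ** 0.5).astype(np.uint8)      # fixed gamma = 0.5
print(entropy(x), entropy(y), ambe(x, y))
```

Color images are usually converted to a luminance channel before computing such metrics; the exact conversion used for Table 1 is an assumption here.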





Table 1. Quantitative comparisons of different methods in terms of the entropy, LOE and AMBE.

Images      Index     LIME     Lee's Method   AGCWD    SIRE     GLAGC
Urban       Entropy   7.59     7.62           7.67     7.72     7.78
            LOE       204.36   171.13         39.70    29.05    118.32
            AMBE      59.62    19.50          34.80    9.82     37.92
Baby        Entropy   7.09     7.77           7.67     7.83     7.76
            LOE       333.85   100.08         176.69   120.13   111.95
            AMBE      49.18    3.83           25.78    15.78    14.98
Street      Entropy   7.57     7.68           7.57     7.67     7.82
            LOE       282.43   93.54          89.56    141.58   173.6
            AMBE      56.03    15.48          24.46    18.29    39.15
Building    Entropy   7.54     7.11           7.50     7.42     7.35
            LOE       191.90   162.11         30.19    147.51   177.09
            AMBE      49.29    41.17          43.97    41.98    64.93
Goddess     Entropy   7.49     7.38           7.79     7.70     7.47
            LOE       199.21   283.96         43.77    192.93   105.01
            AMBE      72.12    19.35          44.42    34.17    43.87
Landscape   Entropy   7.83     7.64           7.78     7.46     7.82
            LOE       84.73    152.60         59.83    172.58   85.30
            AMBE      14.41    18.60          16.32    33.83    9.33
AVE.        Entropy   7.52     7.53           7.66     7.63     7.67
            LOE       204.36   138.24         66.04    127.58   115.88
            AMBE      50.111   19.66          31.62    27.31    35.03
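The LOE (lightness order error) values in Table 1 measure how well the relative lightness order between pixel pairs is preserved after enhancement (lower is better). A brute-force sketch following the commonly used definition is shown below; the paper's exact protocol (subsampling factor, color handling) may differ:

```python
import numpy as np

def loe(original, enhanced, step=8):
    """Lightness Order Error on a subsampled pixel grid.

    For each sampled pixel, count how many other sampled pixels change
    their lightness order relative to it after enhancement, then average.
    0 means the lightness order is fully preserved.
    """
    L  = original.astype(np.float64)[::step, ::step].ravel()
    Le = enhanced.astype(np.float64)[::step, ::step].ravel()
    err = 0
    for i in range(L.size):
        err += np.count_nonzero((L[i] >= L) != (Le[i] >= Le))
    return err / L.size

# A strictly monotone mapping such as gamma correction preserves order,
# so it yields LOE = 0 on an image with distinct pixel values.
a = np.arange(64, dtype=np.uint8).reshape(8, 8)
print(loe(a, (255.0 * (a / 255.0) ** 0.5).astype(np.uint8), step=1))
```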

Table 2. Average computational time (unit: seconds) of the different methods.

Lee’s Method LIME AGCWD SIRE Ours (GLAGC) 0.067 0.21 0.136 8.51 0.095

In summary, the comparative experiments show that the proposed method performs well in low-illumination enhancement, uneven-illumination improvement and illumination maintenance. Lee's method may cause ripple distortion and excessive contrast enhancement; the LIME method can handle various illuminance conditions, but its results are oversaturated in some regions; the AGCWD method formulates the gamma mapping curve according to the histogram of the image without considering spatial information, which degrades its performance on unevenly illuminated images; and the SIRE method is relatively good, but its practicality is limited by its time consumption.

Figure 24. More results by our proposed GLAGC.



5. Conclusions

In this article, we propose an adaptive image illumination perception and correction algorithm in the wavelet domain. We use the wavelet transform to obtain the illumination of an image, and then novel global statistical illuminance features and local spatial illuminance features are proposed as the foundation of illumination perception. An adaptive dual-gamma correction function is applied accordingly; moreover, edge smoothness is retained by adaptive adjustment. In addition, the proposed stabilization factor can suppress low-illumination noise. Comparative experiments verify that the adaptability, naturalness preservation and efficiency of this algorithm on different images are improved compared with previous state-of-the-art methods. In addition to image enhancement, our algorithm is promising for automatically providing an appropriate gamma factor by learning from only a few captured images.
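As a rough illustration of the wavelet-domain pipeline summarized above, the following sketch applies a single fixed gamma to the approximation (illumination) subband of a one-level Haar transform. This is only the general idea: the actual GLAGC algorithm derives two adaptive gammas from the global and spatial illumination features and also adjusts the high-frequency subbands, neither of which is modeled here.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform (x must have even height and width)."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return (a + b + c + d) / 4.0, ((a + b - c - d) / 4.0,
                                   (a - b + c - d) / 4.0,
                                   (a - b - c + d) / 4.0)

def haar_idwt2(ll, bands):
    """Exact inverse of haar_dwt2."""
    lh, hl, hh = bands
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def enhance(img, gamma=0.6):
    """Brighten by gamma-correcting only the low-pass (illumination) band."""
    ll, bands = haar_dwt2(img.astype(np.float64) / 255.0)
    ll = np.clip(ll, 0.0, 1.0) ** gamma        # gamma < 1 lifts dark regions
    return np.clip(255.0 * haar_idwt2(ll, bands), 0, 255).astype(np.uint8)

dark = np.full((8, 8), 40, dtype=np.uint8)     # a uniformly dark toy image
print(dark.mean(), enhance(dark, gamma=0.5).mean())
```

Because the high-frequency subbands pass through unchanged, edge detail is largely preserved while the low-pass illumination estimate is brightened.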

Author Contributions: Conceptualization, H.S.; methodology, W.Y.; software, H.Y. and D.L.; validation, G.L. and H.S.; formal analysis, H.Y. and D.L.; investigation, W.Y., H.Y., D.L. and H.S.; writing—original draft preparation, W.Y.; writing—review and editing, D.L.; visualization, W.Y., H.S., H.Y. and G.L.; project administration, W.Y. All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded by the National Natural Science Foundation of China, grant number 51775214.

Data Availability Statement: Data sharing is not applicable. No new data were created or analyzed in this study.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Srinivas, K.; Bhandari, A.K. Low light image enhancement with adaptive sigmoid transfer function. IET Image Process. 2020, 14, 668–678.
2. Guo, X.; Li, Y.; Ling, H. LIME: Low-Light Image Enhancement via Illumination Map Estimation. IEEE Trans. Image Process. 2017, 26, 982–993.
3. Fan, C.-N.; Zhang, F.-Y. Homomorphic filtering based illumination normalization method for face recognition. Pattern Recognit. Lett. 2011, 32, 1468–1479.
4. Land, E.H.; McCann, J.J. Lightness and the Retinex theory. J. Opt. Soc. Am. 1971, 61, 1–11.
5. Land, E.H. The Retinex theory of color vision. Sci. Am. 1977, 237, 108–128.
6. Jobson, D.J.; Rahman, Z.; Woodell, G.A. Properties and performance of the center/surround Retinex. IEEE Trans. Image Process. 1997, 6, 451–462.
7. Rahman, Z.; Jobson, D.J.; Woodell, G.A. Multi-scale Retinex for color image enhancement. Proc. ICIP 1996, 3, 1003–1006.
8. Yue, H.; Yang, J.; Sun, X.; Wu, F.; Hou, C. Contrast enhancement based on intrinsic image decomposition. IEEE Trans. Image Process. 2017, 26, 3981–3994.
9. Celik, T.; Tjahjadi, T. Image Resolution Enhancement Using Dual-Tree Complex Wavelet Transform. IEEE Geosci. Remote Sens. Lett. 2010, 7.
10. Easley, G.R.; Labate, D.; Colonna, F. Shearlet-Based Total Variation Diffusion for Denoising. IEEE Trans. Image Process. 2009, 18, 260–268.
11. Zuiderveld, K. Contrast limited adaptive histogram equalization. In Graphics Gems IV; Academic Press: San Diego, CA, USA, 1994; pp. 474–485.
12. Kim, Y.-T. Contrast enhancement using brightness preserving bi-histogram equalization. IEEE Trans. Consum. Electron. 1997, 43, 1–8.
13. Clement, J.C.; Parbukunmar, M.; Basker, A. Color image enhancement in compressed DCT domain. ICGST GVIP J. 2010, 10, 31–38.
14. Grigoryan, A.M.; Jenkinson, J.; Agaian, S.S. Quaternion Fourier transform based alpha-rooting method for color image measurement and enhancement. Signal Process. 2015, 109, 269–289.
15. Huang, Z.; Zhang, T.; Li, Q.; Fang, H. Adaptive gamma correction based on cumulative histogram for enhancing near-infrared images. Infrared Phys. Technol. 2016, 79, 205–215.
16. Cao, G.; Huang, L.; Tian, H.; Huang, X.; Wang, Y.; Zhi, R. Contrast enhancement of brightness-distorted images by improved adaptive gamma correction. Comput. Electr. Eng. 2018, 66, 569–582.
17. Huang, S.C.; Cheng, F.C.; Chiu, Y.S. Efficient contrast enhancement using adaptive gamma correction with weighting distribution. IEEE Trans. Image Process. 2013, 22, 1032–1041.
18. Lee, S.; Kwon, H.; Han, H.; Lee, G.; Kang, B. A space-variant luminance map based color image enhancement. IEEE Trans. Consum. Electron. 2010, 56, 2636–2643.
19. Fu, X.; Zeng, D.; Huang, Y.; Zhang, X.P.; Ding, X. A weighted variational model for simultaneous reflectance and illumination estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 2782–2790.
20. Wang, L.; Xiao, L.; Liu, H.; Wei, Z. Variational Bayesian Method for Retinex. IEEE Trans. Image Process. 2014, 23, 3381–3396.
21. Loza, A.; Bull, D.; Achim, A.M. Automatic contrast enhancement of low-light images based on local statistics of wavelet coefficients. Proc. ICIP 2010, 3553–3556.
22. Zotin, A. Fast Algorithm of Image Enhancement based on Multi-Scale Retinex. Procedia Comput. Sci. 2018, 131, 6–14.
23. Jung, C.; Yang, Q.; Sun, T.; Fu, Q.; Song, H. Low light image enhancement with dual-tree complex wavelet transform. J. Vis. Commun. Image Represent. 2017, 42, 28–36.

24. Ye, Z.; Mohamadian, H.; Ye, Y. Information Measures for Biometric Identification via 2D Discrete Wavelet Transform. In Proceedings of the 2007 IEEE International Conference on Automation Science and Engineering, Scottsdale, AZ, USA, 22–25 September 2007.
25. Hou, X.; Zhang, L. Saliency Detection: A Spectral Residual Approach. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007.
26. Wang, S.; Zheng, J.; Hu, H.M.; Li, B. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Trans. Image Process. 2013, 22, 3538–3548.
27. Rajavel, P. Image Dependent Brightness Preserving Histogram Equalization. IEEE Trans. Consum. Electron. 2010, 56, 756–763.