Received September 21, 2017, accepted October 13, 2017. Date of publication xxxx 00, 0000, date of current version xxxx 00, 0000.

Digital Object Identifier 10.1109/ACCESS.2017.2763898

Dark Image Enhancement Using Perceptual Color Transfer

JONATHAN CEPEDA-NEGRETE1, RAUL E. SANCHEZ-YANEZ2, (Member, IEEE), FERNANDO E. CORREA-TOME2, AND ROCIO A. LIZARRAGA-MORALES3, (Member, IEEE)

1Department of Agricultural Engineering, University of Guanajuato DICIVA, Guanajuato 36500, Mexico
2Department of Electronics Engineering, University of Guanajuato DICIS, Guanajuato 36500, Mexico
3Department of Multidisciplinary Studies, University of Guanajuato DICIS, Guanajuato 36500, Mexico

Corresponding author: Raul E. Sanchez-Yanez ([email protected])

The work of J. Cepeda-Negrete was supported by the Mexican National Council on Science and Technology (CONACyT) through the Scholarship 290747 under Grant 388681/254884.

ABSTRACT In this paper, we introduce an image enhancement approach for transforming dark images into lightened scenes, and we evaluate such a method in different perceptual color spaces in order to find the best suited for this particular task. Specifically, we use a classical color transfer method where we obtain first-order statistics from a target image and transfer them to a dark input, modifying its hue and brightness. Two aspects are particular to this paper: the application of color transfer to dark imagery, and the search for the best color space for the application. In this regard, the tests performed show an accurate transference of colors when using perceptual color spaces, RLAB being the best color space for the procedure. Our results show that the methodology presented in this paper can be a good alternative to low-light or night vision processing techniques. Besides, the proposed method has a low computational complexity, a property that is important for real-time applications or for low-resource systems. This method can be used as a preprocessing step in order to improve the recognition and interpretation of dark imagery in a wide range of applications.

INDEX TERMS Perceptual color space, color transfer, dark image enhancement, night vision.

I. INTRODUCTION

Image enhancement is a challenging task in the image processing field. The objective of image enhancement algorithms is to improve the quality and to ease the visibility of a given image, e.g. an image with noise, an image with low contrast, or a dark image. With such improvement, images are more useful in application fields like human understanding, digital photography [1], medical image analysis [2], object detection [3], face recognition [4], and video surveillance [5].

Dark images, or images taken under low light conditions, are problematic because of their narrow dynamic range. Under these conditions, a regular camera sensor introduces significant amounts of noise, further reducing the information content in the image. Because of these limitations, dark image enhancement algorithms occasionally produce artifacts in the processed images [6]. One traditional approach to dark image enhancement is to use monochromatic representations and ignore the color features. However, color images have numerous benefits over monochromatic images for surveillance and security applications [7]–[10]. Moreover, a color representation may facilitate the recognition of night vision imagery and its interpretation [8].

Most of the classical methods that have been used over decades for dark image enhancement are histogram-based; for example, histogram stretching, histogram equalization, brightness preserving bi-histogram equalization [11], and contrast limited adaptive histogram equalization [12], [13], to mention a few. The aforementioned methods have been categorized as direct and indirect methods [14]. Direct methods consist of improving image contrast by optimizing an objective contrast measure. Indirect methods exploit the dynamic range without using a contrast measure. However, the performance of such histogram-based algorithms is very limited with color images, because these methods change the correlation between the color components of the original scene. More recently, a number of methods for improving the contrast using a global mapping from feature analysis have been published, especially oriented to video enhancement [15], [16].


Color constancy approaches are also used to increase the overall luminance in the image. Although color constancy algorithms were originally developed to estimate the color of a light source by discarding the illuminant from the scene, they also improve the chromatic content [17]. Some works have explored the use of color constancy algorithms for color image enhancement purposes [18], [19]. In particular, [18] was oriented to local contrast enhancement using the White-Patch and the Gray-World algorithms in combination with an automatic color equalization technique. In this work, and just for comparison purposes, we have included the two aforementioned color constancy algorithms to enhance dark images.

Image fusion is another approach used to enhance dark images. This technique increases the visual information in an image by combining different bands or images into the RGB space [9], [10], [20]. In image fusion, generally two monochromatic images from different spectral bands are used. A near-infrared or a visible image is considered as the R component, and a thermal image is designated as the G component [8], [21]. This combination of bands is used to build a look-up table (LUT) to transfer colors to other images. However, this scheme may produce images with false colors (i.e. colors that are not actually in the scene). These false colors could also diminish the scene comprehension [22], [23].

Color transfer (a.k.a. color mapping) is an efficient approach to enhance an image under low light conditions while avoiding false colors. This technique recolors an input image by transferring the color content from another image used as a reference (target). There are three main strategies used for color transfer between images: geometric-based methods, user-aided solutions, and statistical approaches. In geometric-based methods [24], the transfer of aspects of color rendition from one image to another can be facilitated by searching for corresponding features that are depicted in both images. By actively finding correspondences between pairs of images, the color transfer algorithm can better ensure that features occurring in both images end up having the same colors. When the structure and content of the input image are very different from those of the target image, many automatic methods will fail to find a successful mapping. In such cases, user input may be required in order to guide the correspondences between the source and the reference; these methods are referred to as user-aided solutions [25]–[27]. When direct correspondences between image features are not available, approaches using statistical properties are often used to define a mapping between the two images [28]–[34]. Concerning the problem of false colors, different studies have addressed the correction of unreal appearance using color transfer techniques [8], [22], [35], [36]. Such works have been specifically oriented to tasks like scene segmentation and classification [37], [38]. If designed properly, the color transfer applied to dark imagery improves the ability of an observer to understand a scene [37]. Notice that most color transfer methods were developed using the lαβ color space, leaving their performance in other color spaces unstudied.

In this work, we propose the applicability of a well-known statistical method for color transfer [28], now using the color mapping to transform dark images into daylight-looking images. A few previous works have already studied night-time imagery enhancement. However, the images used in those studies are professional landscapes with a controlled exposure [39], [40], or they were created by the fusion of images from different spectral or color bands [8], [41]–[43]. Our approach uses only a single image which is completely dark (without adjusted exposure), obtained from a common RGB-CCD sensor. Furthermore, most of the color transfer research has been performed using the lαβ color space; few studies have focused on other perceptual spaces, and none on the RLAB space. In this study, we propose to apply the color transfer using RGB and four perceptual color spaces: lαβ, CIELUV, CIELAB and RLAB. The latter color space was designed specifically to emulate the human perception of color under extreme light conditions [44]; dark or night imagery can be included in such kind of conditions. The performance of the color transfer using all color spaces is compared. Additionally, we include in the comparison the results yielded by image enhancement methods used as reference. Tests are conducted in order to support our hypothesis: better results are obtained when the color transfer is performed in the RLAB space.

The rest of this article is organized as follows. The proposed framework is presented in Section II, including the description of the color spaces considered, the formulation of the color transfer function, and the method used to assess the correct color transfer. Experimental results are discussed in Section III, followed by the concluding remarks in Section IV.

II. METHODOLOGY

This section introduces the perceptual color transfer (PCT) technique used in this work to transform a dark image into a lightened scene. We use a classical color transfer technique to perform this transformation [28]. Although there are many color transfer approaches, this method was chosen because of its simplicity and speed. Figure 1 shows the color transfer procedure performed using this technique. Notice that the images used in this study are obtained from a common RGB-CCD sensor. The details regarding the methodology are described in the following subsections.

A. COLOR TRANSFER USING FIRST ORDER STATISTICS

A color transfer method aims to modify the color content of a given image by transferring the statistics from a reference image. In this study, we use a classical method proposed by Reinhard et al. [28], in which only the global mean and the standard deviation of the image are calculated. The aim of our work is to transform an input image (dark) into another with a look similar to that of the reference image (target); specifically, modifying the color content of the dark image using the statistics from the target image. This procedure may be improved by using a different color space.


FIGURE 1. Color transfer procedure upon a dark image.

After the conversion of both images from RGB to that color space, the statistics are calculated for each color channel and for both images, the dark one and the target one. The mean and standard deviation are calculated using Eqs. (1)–(4):

$$\mu_i^D = \frac{1}{M_D N_D} \sum_{x=1}^{M_D} \sum_{y=1}^{N_D} D_i(x, y), \qquad (1)$$

$$\mu_i^T = \frac{1}{M_T N_T} \sum_{x=1}^{M_T} \sum_{y=1}^{N_T} T_i(x, y), \qquad (2)$$

$$\sigma_i^D = \sqrt{\frac{1}{M_D N_D} \sum_{x=1}^{M_D} \sum_{y=1}^{N_D} \left( D_i(x, y) - \mu_i^D \right)^2}, \qquad (3)$$

$$\sigma_i^T = \sqrt{\frac{1}{M_T N_T} \sum_{x=1}^{M_T} \sum_{y=1}^{N_T} \left( T_i(x, y) - \mu_i^T \right)^2}, \qquad (4)$$

where $\mu_i$ and $\sigma_i$ are the mean and standard deviation, $i$ is the channel index, $M$ is the number of rows, and $N$ is the number of columns of the image. Here, the signals $D$ and $T$ correspond to the dark and target images, respectively. The color transfer between the target and the input images for channel $i$ is performed using Eq. (5):

$$O_i(x, y) = \frac{\sigma_i^T}{\sigma_i^D} \left( D_i(x, y) - \mu_i^D \right) + \mu_i^T, \qquad (5)$$

where $O$ represents the output image of the transfer. Finally, we transform the $O$ image back to RGB using the inverse transformation.

Notice that for a given image, the color components are processed separately. If a color space different from RGB is used, the RGB image needs to be transformed to that color space before performing the color transfer. Then, the image is transformed back to RGB to display the results.
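As an illustration, the per-channel transfer of Eqs. (1)–(5) can be written in a few lines of Python with NumPy. This sketch is ours, not the authors' original code; it assumes both images have already been converted to the working color space, and the function name is illustrative.

```python
import numpy as np

def transfer_statistics(dark, target):
    """Per-channel first-order statistics transfer, Eqs. (1)-(5).

    dark, target: float arrays of shape (M, N, 3), already converted
    to the working color space (e.g. RLAB). Returns the output image O.
    """
    out = np.empty_like(dark, dtype=np.float64)
    for i in range(dark.shape[-1]):                     # channel index i
        mu_d, sigma_d = dark[..., i].mean(), dark[..., i].std()
        mu_t, sigma_t = target[..., i].mean(), target[..., i].std()
        # Eq. (5): rescale deviations from the dark mean by the ratio
        # of standard deviations, then shift to the target mean.
        out[..., i] = (sigma_t / sigma_d) * (dark[..., i] - mu_d) + mu_t
    return out
```

Note that the channels are treated independently, which is precisely why a decorrelated, perceptual color space is expected to behave better than RGB here.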

B. COLOR TRANSFER IN THE RLAB PERCEPTUAL COLOR SPACE

The classical color transfer was originally applied in the lαβ color space, and several works have adopted this approach [22], [45], [46]. Reinhard and Pouli [39] performed a comparison of color spaces, finding that the use of the CIELAB color space is also recommended for color transfer using natural low-light images.

In this work, we explore the usage of color transfer for completely dark images. Therefore, we performed tests using five color spaces: RGB, lαβ, CIELUV, CIELAB and RLAB. Other color spaces were also considered; however, preliminary results showed that those spaces are not adequate for our application. Table 1 shows the image components with their corresponding index i (see Eqs. (1)-(4)) for each color space used.

TABLE 1. The components corresponding to the index i according to each color space.

Color space    i = 1    i = 2    i = 3
RGB            R        G        B
lαβ            l        α        β
CIELUV         L*       u*       v*
CIELAB         L*       a*       b*
RLAB           L^R      a^R      b^R

RGB is the first space, included for comparison purposes. The second color space considered is lαβ, inspired by previous studies [23], [28]. The third is the CIE 1976 (L*, u*, v*) color space, commonly known as CIELUV, and the fourth is the CIE 1976 (L*, a*, b*) color space, better known as CIELAB. For these latter two spaces, the Euclidean distance between two points in the space is proportionally uniform to the perceptual difference of the corresponding colors at those points. Finally, the fifth space under evaluation is RLAB, which was originally designed in order to fix the problems shown by CIELAB under unusual lighting conditions [44]. RLAB maintains perceptual properties under normal light conditions (natural light), and also under extreme conditions; dark-time imagery is an example of such extreme cases.

The procedure to convert an image from RGB to RLAB is included here for clarity's sake. To transform an RGB image to perceptual color spaces, the data are first transformed to the CIEXYZ color space [47]. In order to transform an image from RGB into CIEXYZ, the RGB space needs to be defined. Here, sRGB is used because it is based on a colorimetrically calibrated RGB space [48]. Eq. (6) is used to perform the transformation:

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.4124 & 0.3576 & 0.1805 \\ 0.2126 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9505 \end{bmatrix} \begin{bmatrix} r \\ g \\ b \end{bmatrix}, \qquad (6)$$

where $r, g, b \in [0, 1]$, obtained by dividing each R, G, B component by 255. Afterwards, the main equations used to obtain the transformation from XYZ to RLAB are given in Eqs. (8)-(11). For further details, please refer to the work of Fairchild [44].

$$\mathrm{RAM} = \begin{bmatrix} 1.0020 & 0.0401 & -0.0084 \\ -0.0042 & 0.9666 & 0.0008 \\ 0.0000 & 0.0000 & 0.9110 \end{bmatrix}, \qquad (7)$$

$$\begin{bmatrix} X_{ref} \\ Y_{ref} \\ Z_{ref} \end{bmatrix} = \mathrm{RAM} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}, \qquad (8)$$

$$L^R = 100 \, (Y_{ref})^{\sigma}, \qquad (9)$$

$$a^R = 430 \left[ (X_{ref})^{\sigma} - (Y_{ref})^{\sigma} \right], \qquad (10)$$

$$b^R = 170 \left[ (Y_{ref})^{\sigma} - (Z_{ref})^{\sigma} \right]. \qquad (11)$$

In this study, σ = 1/3.5 is used. This value is suggested for images under very low luminance conditions [44]. The inverse transformation required to go back to XYZ is given by the following equations:

$$Y_{ref} = \left( \frac{L^R}{100} \right)^{1/\sigma}, \qquad (12)$$

$$X_{ref} = \left( \frac{a^R}{430} + (Y_{ref})^{\sigma} \right)^{1/\sigma}, \qquad (13)$$

$$Z_{ref} = \left( (Y_{ref})^{\sigma} - \frac{b^R}{170} \right)^{1/\sigma}, \qquad (14)$$

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = (\mathrm{RAM})^{-1} \begin{bmatrix} X_{ref} \\ Y_{ref} \\ Z_{ref} \end{bmatrix}. \qquad (15)$$

Finally, the inverse transformation from CIEXYZ to RGB is given in Eq. (16):

$$\begin{bmatrix} r \\ g \\ b \end{bmatrix} = \begin{bmatrix} 3.2410 & -1.5374 & -0.4986 \\ -0.9692 & 1.8760 & 0.0416 \\ 0.0556 & -0.2040 & 1.0570 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}. \qquad (16)$$

The equations for the color spaces lαβ, CIELUV and CIELAB can be consulted in the Appendix.
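A compact sketch of the forward chain, Eqs. (6)–(11), is given below, again for NumPy images; the matrices are those given above, while the function name and interface are our own.

```python
import numpy as np

# Eq. (6): sRGB (components normalized to [0, 1]) to CIEXYZ.
M_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])

# Eq. (7): reference adaptation matrix RAM.
RAM = np.array([[ 1.0020, 0.0401, -0.0084],
                [-0.0042, 0.9666,  0.0008],
                [ 0.0000, 0.0000,  0.9110]])

def rgb_to_rlab(rgb, sigma=1.0 / 3.5):
    """rgb: (M, N, 3) array with values in [0, 255]; returns L^R, a^R, b^R."""
    xyz = (rgb / 255.0) @ M_XYZ.T            # Eq. (6)
    ref = xyz @ RAM.T                        # Eq. (8)
    xs, ys, zs = (ref[..., k] ** sigma for k in range(3))
    L = 100.0 * ys                           # Eq. (9)
    a = 430.0 * (xs - ys)                    # Eq. (10)
    b = 170.0 * (ys - zs)                    # Eq. (11)
    return np.stack([L, a, b], axis=-1)
```

The inverse path is the mirror image of this function, applying Eqs. (12)–(16) in order.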

C. ASSESSMENT OF THE COLOR TRANSFER

An important problem regarding image processing methodologies is the comparison of images. When different algorithms are applied to an image, an objective measure is necessary to compare the outcomes. In this study, we assess the quality of the image by calculating the similarity between the outcome and the target image. These distance metrics have already been used for quality assessment in a previous work [49].

The comparison measure uses the histograms of the images (a histogram with 255 bins for each component), calculating the distance between them. To corroborate the consistency of the comparisons, we tested three different distances between the histograms of the outcome and target images: Euclidean ($d_{L_2}$), Bhattacharyya ($d_B$) [50] and chi-square ($d_{chi-s}$):

$$d_{L_2}(h_o, h_t) = \sqrt{\sum_j \left( h_o(j) - h_t(j) \right)^2}, \qquad (17)$$

$$d_B(h_o, h_t) = \sqrt{1 - \frac{1}{\sqrt{\mu_{h_o} \mu_{h_t} N^2}} \sum_j \sqrt{h_o(j) \cdot h_t(j)}}, \qquad (18)$$

$$d_{chi-s}(h_o, h_t) = \sum_j \frac{\left( h_o(j) - h_t(j) \right)^2}{h_o(j)}, \qquad (19)$$

where $h_o$ and $h_t$ are the normalized color histograms from the output image and from the target image, respectively. For the Euclidean, Bhattacharyya and chi-square ($d_{L_2}$, $d_B$, $d_{chi-s}$) distances, a small distance value corresponds to a better color transfer. In the comparison tests, the intersection ($d_\cap$) measure is also considered:

$$d_\cap(h_o, h_t) = \sum_j \min(h_o(j), h_t(j)), \qquad (20)$$

where the higher the value, the better the color transfer is.
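The four histogram distances, Eqs. (17)–(20), can be sketched as follows; the small epsilon in the chi-square term and the clipping inside the square root are numerical guards we added, not part of the original formulation.

```python
import numpy as np

def histogram_distances(h_o, h_t, eps=1e-12):
    """h_o, h_t: normalized 1-D histograms of equal length N."""
    n = h_o.size
    d_l2 = np.sqrt(np.sum((h_o - h_t) ** 2))                       # Eq. (17)
    bc = np.sum(np.sqrt(h_o * h_t))
    d_b = np.sqrt(max(0.0, 1.0 - bc /
                      np.sqrt(h_o.mean() * h_t.mean() * n ** 2)))  # Eq. (18)
    d_chi = np.sum((h_o - h_t) ** 2 / (h_o + eps))                 # Eq. (19)
    d_int = np.sum(np.minimum(h_o, h_t))                           # Eq. (20)
    return d_l2, d_b, d_chi, d_int
```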

III. EXPERIMENTAL RESULTS

The experiments were performed using different approaches for the enhancement of a dark image given as input to our system. On one hand, and for comparison purposes, three methods are used to enhance this input without the need of any reference image. These methods are the White-Patch algorithm (WP), the Gray-World algorithm (GW), and Histogram Equalization (HE). On the other hand, outcomes are obtained using the color transfer from a specific target image into the same input. This latter procedure is performed in RGB and in each one of the perceptual color spaces under discussion: lαβ, CIELUV, CIELAB and RLAB. Figure 2 depicts the proposed methodology comparing the different approaches. It is important to mention that additional experiments were performed using other, non-perceptual color spaces: HSI, YIQ, YCbCr and the opponent color space (O1O2O3). However, we obtained poor results using these spaces, and for that reason the corresponding results are not reported in this work. As far as we know, there are no reference databases for this particular purpose (images under total darkness); hence, the experiments are carried out in two ways. Firstly, we provide a dark imagery dataset consisting of 200 RGB images, obtained using an off-the-shelf camera. Secondly, we transform the well-known BSDS300 database [51] into a night-time image set using the methodology proposed by Thompson et al. [52]. Both sets of images are available in [53] and [54].

FIGURE 2. Diagram of the methodology followed for performance comparison.
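For reference, the two color constancy baselines mentioned above (WP and GW) can be written as per-channel von Kries scalings; this is a simplified sketch, not necessarily the exact implementations used in the experiments.

```python
import numpy as np

def gray_world(img):
    """Gray-World: scale each channel so its mean equals the global mean."""
    img = img.astype(np.float64)
    gains = img.mean() / img.mean(axis=(0, 1))
    return np.clip(img * gains, 0, 255).astype(np.uint8)

def white_patch(img):
    """White-Patch: scale each channel so its maximum maps to white (255)."""
    img = img.astype(np.float64)
    gains = 255.0 / img.max(axis=(0, 1))
    return np.clip(img * gains, 0, 255).astype(np.uint8)
```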

A. EXPERIMENT 1: NATURAL NIGHT-TIME IMAGERY

As a first experiment, we propose a collection of dark images taken with an off-the-shelf camera. The image database used consists of a collection of 200 dark images. The image set was taken under dark light conditions or under the moonlight. Additionally, we chose 10 target images from the BSDS300 database: 2092, 35010, 35058, 95006, 100080, 108005, 113044, 124084, 143090 and 232038. This selection was made arbitrarily, according to the variation in their color content. The experiment consists of the color transfer between the darkened image and its corresponding original natural scene using the aforementioned color spaces. Additionally, WP, GW and HE are used as reference methods. An example out of the 200 dark images from this first experiment is depicted in Figure 3. Visually, we can appreciate that the best outcomes are obtained using perceptual color spaces. However, it is important to analyze the metrics numerically in order to determine the best one. A test series was performed measuring the distance between two histograms: a histogram corresponding to the original natural image and the other to the outcome.


Continuing with the same example, Table 2 presents the results obtained from image No. 92, shown in Figure 3. Each cell in this table shows the comparison value between the outcome and the target image given in Figure 3b. Each value is obtained using an enhancement method and a specific distance metric. This table shows that, without exception, the best values are obtained using RLAB. Additionally, we can appreciate that the color transfer in RGB is worse than the results obtained using perceptual color spaces.

TABLE 2. An example of distance measures between the target (Figure 3b) and the outcomes after processing the input (Figure 3a) using the different approaches (Figure 3(c-j)).

Although these comments apply only to this particular example, further exhaustive experimentation was done in order to obtain general conclusions. Such conclusions are drawn from the reproduction of each input-target pair using different target images. Figure 4 shows an example of the outcomes obtained from a single input image after applying six different targets. The color transfer is made in the RLAB color space. The figure shows some target images highly dissimilar to the input scene, leading to the generation of false colors in the outcomes. Although this feature could be useful in particular applications (e.g. generation of artistic effects), it is not desirable for our purposes.

The color transfer was applied to the 200 dark images, using for each image all the 10 targets. The distance measures were computed for all the outcome-target pairs. A total of 2000 outcomes were obtained for each color transfer method and for each color space, and 2000 measures for each reference method. Afterward, the mean value from the 2000 measures was computed for each approach under evaluation. In Table 3, the cells show the mean value for each approach and for each distance measure; we can appreciate that the PCT in the RLAB space is the best approach for the whole set of images.

TABLE 3. Mean values from the 2000 measures for each method under analysis. Data are given for the four metrics of distance between histograms.

B. EXPERIMENT 2: NIGHT BSDS300 DATASET

In this experiment, we transform the widely known BSDS300 database [51] into a night-time image set using the framework proposed by Thompson et al. [52]. This methodology emulates the loss-of-detail and noisy effects associated with night vision, assuming the input is an RGB image. If our source image is a colorful image, a dark tone should be mapped first. Thus, the image is mapped to a scene with night lighting, where each pixel is a single number indicating the ''brightness'' of the pixel as seen at night. This will tend toward a bluish hue, because the rods in the human eye are more sensitive to blues than to greens and reds [52]. Afterwards, a filtering stage is carried out in order to simulate loss of visual acuity. One way of achieving this is using an anisotropic diffusion, which has the effect of smoothing the geometry of edges without blurring across the edges. Finally, an amount of Gaussian noise is applied to the images, because an actual night image presents noise to a degree that depends on the sensor. Figure 5 shows three examples of the transformed BSDS300 dataset.

FIGURE 5. Three samples of the new darkened BSDS300 dataset: 2092, 14037 and 65010 images, respectively.
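A rough sketch of this darkening pipeline is given below, replacing the anisotropic diffusion stage with a simple Gaussian blur for brevity; the channel weights and noise level are illustrative values of our choosing, not those of Thompson et al. [52].

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_night(rgb, blur_sigma=1.5, noise_std=0.02, seed=None):
    """Crude night-scene simulation: scotopic-like luminance with a
    bluish cast, loss of acuity, and sensor-like Gaussian noise."""
    rng = np.random.default_rng(seed)
    img = rgb.astype(np.float64) / 255.0
    # Rod-weighted luminance: blue and green weigh more than red
    # (illustrative weights, not those of the original framework).
    lum = 0.10 * img[..., 0] + 0.40 * img[..., 1] + 0.50 * img[..., 2]
    # Map the single "brightness" value to a dim, bluish night tone.
    night = np.stack([0.10 * lum, 0.20 * lum, 0.45 * lum], axis=-1)
    # Acuity loss; a Gaussian blur stands in for anisotropic diffusion.
    night = gaussian_filter(night, sigma=(blur_sigma, blur_sigma, 0))
    # Additive sensor noise.
    night += rng.normal(0.0, noise_std, night.shape)
    return np.clip(night * 255.0, 0.0, 255.0).astype(np.uint8)
```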


FIGURE 3. A sample out of the 200 dark images and its enhanced outcomes obtained using different methods. In (a) the input no. 92 (landscape); (b) the target image BSDS300 No. 2092; outcomes using (c) WP, (d) GW, and (e) Histogram equalization; outcomes using (b) as target and (f) CT in RGB, (g) PCT in lαβ, (h) PCT in CIELUV, (i) PCT in CIELAB, and (j) PCT in RLAB.


FIGURE 4. Examples of the corresponding outcomes of the color transfer using different target images.

In this second experiment, the color transfer was applied to the 300 darkened images, using their corresponding original scene as target. A sample out of the 300 darkened images from this second experiment is depicted in Figure 6. Similarly to the first experiment, a test series was carried out to obtain the four distances. Table 4 provides the quantitative results obtained from the sample image shown in Figure 6. This table shows that, according to the Euclidean, Bhattacharyya and intersection distances, the best values are obtained using RLAB. Only the chi-square distance shows that the method using CIELAB is the best; however, the value obtained with RLAB is only marginally worse.

TABLE 4. An example of distance measures between the original image 65010 (Figure 6b) and the outcomes after processing the darkened input (Figure 6a) using the different approaches (Figure 6(c-j)).

The distance measures were computed for all the outcome-target pairs. A total of 300 outcomes were obtained for each color transfer method and for each color space (RGB, lαβ, CIELUV, CIELAB or RLAB). Additionally, the outcomes from the reference methods were compared with the corresponding target image, obtaining 300 measures for each reference method (WP, GW and histogram equalization). Afterward, the mean value from the 300 measures was calculated for each approach under evaluation. In Table 5, the cells show the mean value for each approach and for each distance measure. Consistently, the mean values show that the color transfer using the RLAB space is our best option in order to obtain the best mapping.

TABLE 5. Mean values from the 300 measures for each method under analysis. Data are given for the four metrics of distance between histograms.

We have found that applying the color transfer methodology in a perceptual color space is appreciably better than applying it in the RGB color space. Among all the methods, color transfer using RLAB always attains the best results for each distance metric used. In addition to this test series, we performed a statistical significance z-test between the results obtained using the RLAB and the CIELAB spaces, finding that the difference is significant at a 95% confidence level. We may conclude that the color transfer in the RLAB color space is the best choice for the enhancement of dark images given a target color content.
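Such a test can be reproduced along the following lines: a two-sample z-test on the mean distances, sketched under the assumption of large, independent samples such as the 2000 measures per space.

```python
import numpy as np
from scipy.stats import norm

def z_test(sample_a, sample_b):
    """Two-sample z-test for a difference of means.

    sample_a, sample_b: 1-D arrays of distance values (e.g. the RLAB
    and CIELAB measures). Returns z and a two-sided p-value.
    """
    a, b = np.asarray(sample_a, float), np.asarray(sample_b, float)
    se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
    z = (a.mean() - b.mean()) / se
    p = 2.0 * norm.sf(abs(z))   # significant at 95% confidence if p < 0.05
    return z, p
```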

C. COMMENTS ON NOISE ISSUES

Although this work is focused on the emulation of colors and the assessment of this task in several color spaces, we need to take into account that the enhancement of dark images also amplifies the noise existing in them. Here we include a brief discussion in this regard.

In an additional test regarding noise, we measured the PSNR (Peak Signal-to-Noise Ratio) of the images generated in Experiment 2. We used this dataset because the PSNR measures between the different outcomes and the original scene can be computed for comparative purposes. Such original image is the one in daylight, before the darkening, blur and noise addition procedures.

Noting that the higher the PSNR value, the better the quality of the lightening procedure, Table 6 shows the averages of the PSNR for each approach over the whole set of 300 images. It is possible to appreciate that the best value corresponds to the PCT approach in the RLAB space. A qualitative example for an image out of the 300 is given in Figure 6, showing the particular PSNR results for this sample. In this case, and in accordance with the average results over the whole set, the PCT in RLAB yields the best PSNR value. In general, we can say that PCT in the RLAB space produces the best natural color transfer and, in a collateral way, also reduces the presence of noise.

TABLE 6. Comparison of the PSNR average for each approach on the whole BSDS300 dataset. PCT in RLAB obtained the highest value, meaning that this approach produces less noise than the others.
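PSNR is used here in its standard form for 8-bit images; a minimal version:

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak Signal-to-Noise Ratio (dB) between two 8-bit images."""
    diff = reference.astype(np.float64) - estimate.astype(np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)   # higher means less distortion
```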


FIGURE 6. A sample out of the 300 images from the new darkened BSDS300 dataset and the enhanced outcomes obtained using different methods. In (a) the night image 65010; (b) its corresponding original image; outcomes using (c) WP, (d) GW, and (e) Histogram equalization; outcomes using (b) as target and (f) CT in RGB, (g) PCT in lαβ, (h) PCT in CIELUV, (i) PCT in CIELAB, and (j) PCT in RLAB. The PSNR value is also included in the box of each outcome, representing the noise level present.

If additional noise reduction is required, a number of filters, ranging from the basic mean and median filters to more specialized ones [55], can be applied after our approach, improving in this way the look of the image. Figure 7 shows the two samples used as qualitative examples in our experiments. The reference (original) images are presented in (a), and the outcomes of the color transfer in RLAB are depicted in (b); finally, (c) shows the outcomes from (b) after successively applying two 3×3 filters, first a mean filter and then a median one. In the figure, we can appreciate that the filtered outcomes attain a higher PSNR value.

FIGURE 7. The two examples used in our experiments. (a) Original images. (b) Outcomes from the color transfer in RLAB space. (c) The same outcomes after a filtering stage. Additionally, the PSNR value of the comparison between the original and the outcome images is included.

IV. CONCLUDING REMARKS

In this study, we have discussed an image processing-based approach to transform dark-time imagery into scenes with a daylight appearance. This approach uses a single input color image, captured by an off-the-shelf camera. Our main contribution is the use of color transfer in a new way to lighten dark images, taking advantage of the property of color transfer methodologies to diminish the production of unnatural colors. The experimental results show that, in general, the color transfer in perceptual spaces yields better results than the color transfer in RGB. Besides, the color transfer applied to images in the RLAB space attains the best results. We have also proposed a dataset of night-time imagery as a benchmark for future studies in the field, as well as the modification of another widely known dataset, artificially darkened. This study may be applied to improve the recognition and interpretation of night-time imagery in tasks such as video surveillance. Moreover, we found that the PCT in the RLAB space produces the best natural color transfer and, at the same time, reduces the presence of noise. In the future, this alternative approach to traditional night-vision methods could also be implemented in mobile applications.


APPENDIX
Equation sets for transforming coordinates from RGB to perceptual color spaces and backward.

A. lαβ COLOR SPACE

Firstly, the data are transformed to an intermediate color space, the LMS:

$$\begin{bmatrix} L \\ M \\ S \end{bmatrix} = \begin{bmatrix} 0.3811 & 0.5783 & 0.0402 \\ 0.1967 & 0.7244 & 0.0782 \\ 0.0241 & 0.1288 & 0.8444 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}. \qquad (21)$$

The data in the LMS color space show a large amount of skewness, which we can largely eliminate by converting the data to a logarithmic space: $\mathbf{L} = \log L$, $\mathbf{M} = \log M$ and $\mathbf{S} = \log S$ [56]. The equation used for the transformation from LMS to the lαβ space is

$$\begin{bmatrix} l \\ \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{3}} & 0 & 0 \\ 0 & \frac{1}{\sqrt{6}} & 0 \\ 0 & 0 & \frac{1}{\sqrt{2}} \end{bmatrix} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & -2 \\ 1 & -1 & 0 \end{bmatrix} \begin{bmatrix} \mathbf{L} \\ \mathbf{M} \\ \mathbf{S} \end{bmatrix}, \qquad (22)$$

where the l axis corresponds to the luminance channel, and the α and β channels are chromatic yellow-blue and red-green opponent channels, respectively.

The corresponding inverse transformations are given next:

$$\begin{bmatrix} \mathbf{L} \\ \mathbf{M} \\ \mathbf{S} \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & -1 \\ 1 & -2 & 0 \end{bmatrix} \begin{bmatrix} \frac{\sqrt{3}}{3} & 0 & 0 \\ 0 & \frac{\sqrt{6}}{6} & 0 \\ 0 & 0 & \frac{\sqrt{2}}{2} \end{bmatrix} \begin{bmatrix} l \\ \alpha \\ \beta \end{bmatrix}, \qquad (23)$$

then the pixel values are calculated as $L = 10^{\mathbf{L}}$, $M = 10^{\mathbf{M}}$ and $S = 10^{\mathbf{S}}$. Finally, the conversion of data from LMS into RGB is given by the following equation:

$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 4.4679 & -3.5873 & 0.1193 \\ -1.2186 & 2.3809 & -0.1624 \\ 0.0497 & -0.2439 & 1.2045 \end{bmatrix} \begin{bmatrix} L \\ M \\ S \end{bmatrix}. \qquad (24)$$
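A compact sketch of the forward transform, Eqs. (21)–(22), follows; the epsilon guarding the logarithm at zero-valued pixels is our addition.

```python
import numpy as np

# Eq. (21): RGB to the intermediate LMS cone space.
M_LMS = np.array([[0.3811, 0.5783, 0.0402],
                  [0.1967, 0.7244, 0.0782],
                  [0.0241, 0.1288, 0.8444]])

# Eq. (22): decorrelating rotation applied to log-LMS.
M_LAB = np.diag([1 / np.sqrt(3), 1 / np.sqrt(6), 1 / np.sqrt(2)]) @ \
        np.array([[1, 1, 1], [1, 1, -2], [1, -1, 0]])

def rgb_to_lalphabeta(rgb, eps=1e-6):
    """rgb: (M, N, 3) floats in [0, 1]; returns the l, alpha, beta image."""
    lms = rgb @ M_LMS.T
    log_lms = np.log10(lms + eps)   # base-10 log, inverted later as 10**x
    return log_lms @ M_LAB.T
```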

B. CIELUV AND CIELAB COLOR SPACES

The CIELUV color space is obtained from CIEXYZ using the following equations:

$$u' = \frac{4X}{X + 15Y + 3Z}, \qquad (25)$$

$$v' = \frac{9Y}{X + 15Y + 3Z}. \qquad (26)$$

It is necessary to calculate the values $u'_n$ and $v'_n$, which are the chromatic components of the reference white. In this study, we use the illuminant E ($X_n = 1$, $Y_n = 1$ and $Z_n = 1$) as the reference white. Reinhard and Pouli [39] compared various reference whites and concluded that the illuminant E is the best suited for color transfer using perceptual spaces. The $L^*$, $u^*$ and $v^*$ components are computed applying the following equations:

$$L^* = \begin{cases} (2/\sigma)^3 \, Y/Y_n & \text{if } Y/Y_n \le \sigma^3 \\ 116 \, (Y/Y_n)^{1/3} - 16 & \text{otherwise}, \end{cases} \qquad (27)$$

$$u^* = 13 L^* (u' - u'_n), \qquad (28)$$

$$v^* = 13 L^* (v' - v'_n), \qquad (29)$$

where $\sigma = 6/29$. For the inverse transformation from CIELUV to CIEXYZ, the following equations are used:

$$u' = \frac{u^*}{13 L^*} + u'_n, \qquad (30)$$

$$v' = \frac{v^*}{13 L^*} + v'_n, \qquad (31)$$

$$Y = \begin{cases} Y_n \, L^* \, (\sigma/2)^3 & \text{if } L^* \le 8 \\ Y_n \left( \frac{L^* + 16}{116} \right)^3 & \text{otherwise}, \end{cases} \qquad (32)$$

$$X = Y \, \frac{9u'}{4v'}, \qquad (33)$$

$$Z = Y \, \frac{12 - 3u' - 20v'}{4v'}. \qquad (34)$$

The CIELAB color space is computed from CIEXYZ using Eqs. (35)–(38):

$$L^* = 116 \, f(Y/Y_n) - 16, \qquad (35)$$

$$a^* = 500 \left[ f(X/X_n) - f(Y/Y_n) \right], \qquad (36)$$

$$b^* = 200 \left[ f(Y/Y_n) - f(Z/Z_n) \right], \qquad (37)$$

$$f(t) = \begin{cases} t^{1/3} & \text{if } t > \sigma^3 \\ t/(3\sigma^2) + 16/116 & \text{otherwise}, \end{cases} \qquad (38)$$

where $t$ can be $X/X_n$, $Y/Y_n$ or $Z/Z_n$, and $\sigma = 6/29$.

For the inverse transformation, three intermediate variables are required, $f_Y$, $f_X$ and $f_Z$, as shown in Eqs. (39)-(41):

$$f_Y = (L^* + 16)/116, \qquad (39)$$

$$f_X = f_Y + (a^*/500), \qquad (40)$$

$$f_Z = f_Y - (b^*/200). \qquad (41)$$

Finally, Eqs. (42)–(44) are used to obtain the inverse transformation:

$$X = \begin{cases} X_n \, f_X^3 & \text{if } f_X > \sigma \\ 3\sigma^2 X_n \left( f_X - 16/116 \right) & \text{otherwise}, \end{cases} \qquad (42)$$

$$Y = \begin{cases} Y_n \, f_Y^3 & \text{if } f_Y > \sigma \\ 3\sigma^2 Y_n \left( f_Y - 16/116 \right) & \text{otherwise}, \end{cases} \qquad (43)$$

$$Z = \begin{cases} Z_n \, f_Z^3 & \text{if } f_Z > \sigma \\ 3\sigma^2 Z_n \left( f_Z - 16/116 \right) & \text{otherwise}. \end{cases} \qquad (44)$$
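Similarly, a sketch of the CIELAB forward transform, Eqs. (35)–(38), with illuminant E as the default reference white, as used in this study:

```python
import numpy as np

SIGMA = 6.0 / 29.0

def f(t):
    """Eq. (38): cube root above the threshold, linear segment below."""
    return np.where(t > SIGMA ** 3,
                    np.cbrt(t),
                    t / (3.0 * SIGMA ** 2) + 16.0 / 116.0)

def xyz_to_cielab(xyz, white=(1.0, 1.0, 1.0)):
    """xyz: (M, N, 3) array; the default white is illuminant E."""
    ratios = xyz / np.asarray(white)
    fx, fy, fz = (f(ratios[..., k]) for k in range(3))
    L = 116.0 * fy - 16.0            # Eq. (35)
    a = 500.0 * (fx - fy)            # Eq. (36)
    b = 200.0 * (fy - fz)            # Eq. (37)
    return np.stack([L, a, b], axis=-1)
```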

REFERENCES
[1] D.-H. Kim and E.-Y. Cha, "Intensity surface stretching technique for contrast enhancement of digital photography," Multidimensional Syst. Signal Process., vol. 20, no. 1, pp. 81–95, 2009.
[2] R. Firoz, M. S. Ali, M. N. U. Khan, M. K. Hossain, M. K. Islam, and M. Shahinuzzaman, "Medical image enhancement using morphological transformation," J. Data Anal. Inf. Process., vol. 4, no. 1, p. 1, 2016.
[3] G. Ginesu, D. D. Giusto, V. Margner, and P. Meinlschmidt, "Detection of foreign bodies in food by thermal image processing," IEEE Trans. Ind. Electron., vol. 51, no. 2, pp. 480–490, Apr. 2004.
[4] X. Xie and K.-M. Lam, "Face recognition under varying illumination based on a 2D face shape model," Pattern Recognit., vol. 38, no. 2, pp. 221–230, 2005.
[5] L. Havasi, Z. Szlavik, and T. Sziranyi, "Detection of gait characteristics for scene registration in video surveillance system," IEEE Trans. Image Process., vol. 16, no. 2, pp. 503–510, Feb. 2007.
[6] A. R. Rivera, B. Ryu, and O. Chae, "Content-aware dark image enhancement through channel division," IEEE Trans. Image Process., vol. 21, no. 9, pp. 3967–3980, Sep. 2012.
[7] M. T. Sampson, "An assessment of the impact of fused monochrome and fused color night vision displays on reaction time and accuracy in target detection," Defense Tech. Inf. Center, Tech. Rep., 1996.
[8] M. A. Hogervorst and A. Toet, "Fast natural color mapping for night-time imagery," Inf. Fusion, vol. 11, no. 2, pp. 69–77, 2010.
[9] Y. Rao, W. Lin, and L. Chen, "Image-based fusion for video enhancement of night-time surveillance," Opt. Eng., vol. 49, no. 12, pp. 120501-1–120501-3, 2010.
[10] Y. Rao, W. Lin, and L. Chen, "Global motion estimation-based method for nighttime video enhancement," Opt. Eng., vol. 50, no. 5, pp. 057203-1–057203-6, 2011.
[11] Y.-T. Kim, "Contrast enhancement using brightness preserving bi-histogram equalization," IEEE Trans. Consum. Electron., vol. 43, no. 1, pp. 1–8, Feb. 1997.
[12] E. D. Pisano et al., "Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms," J. Digit. Imag., vol. 11, no. 4, pp. 193–200, 1998.
[13] E. F. Arriaga-Garcia, R. E. Sanchez-Yanez, J. Ruiz-Pinales, and M. de Guadalupe Garcia-Hernandez, "Adaptive sigmoid function bihistogram equalization for image contrast enhancement," J. Electron. Imag., vol. 24, no. 5, p. 053009, 2015.
[14] T. Arici, S. Dikbas, and Y. Altunbasak, "A histogram modification framework and its application for image contrast enhancement," IEEE Trans. Image Process., vol. 18, no. 9, pp. 1921–1935, Sep. 2009.
[15] N. Xu, W. Lin, Y. Zhou, Y. Chen, Z. Chen, and H. Li, "A new global-based video enhancement algorithm by fusing features of multiple region-of-interests," in Proc. IEEE Vis. Commun. Image Process. (VCIP), Nov. 2011, pp. 1–4.
[16] Y. Chen, W. Lin, C. Zhang, Z. Chen, N. Xu, and J. Xie, "Intra-and-inter-constraint-based video enhancement based on piecewise tone mapping," IEEE Trans. Circuits Syst. Video Technol., vol. 23, no. 1, pp. 74–82, Jan. 2013.
[17] A. Gijsenij and T. Gevers, "Color constancy using natural image statistics and scene semantics," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 4, pp. 687–698, Apr. 2011.
[18] E. Provenzi, C. Gatta, M. Fierro, and A. Rizzi, "A spatially variant white-patch and gray-world method for color image enhancement driven by local contrast," IEEE Trans. Pattern Anal. Mach. Intell., vol. 30, no. 10, pp. 1757–1770, Oct. 2008.
[19] J. Cepeda-Negrete and R. E. Sanchez-Yanez, "Automatic selection of color constancy algorithms for dark image enhancement by fuzzy rule-based reasoning," Appl. Soft Comput., vol. 28, pp. 1–10, Mar. 2015.
[20] V. Goffaux, C. Jacques, A. Mouraux, A. Oliva, P. Schyns, and B. Rossion, "Diagnostic colours contribute to the early stages of scene categorization: Behavioural and neurophysiological evidence," Vis. Cognit., vol. 12, no. 6, pp. 878–892, 2005.
[21] A. Toet and J. Walraven, "New false color mapping for image fusion," Opt. Eng., vol. 35, no. 3, pp. 650–659, 1996.
[22] A. Toet, "Natural colour mapping for multiband nightvision imagery," Inf. Fusion, vol. 4, no. 3, pp. 155–166, 2003.
[23] A. Toet and E. M. Franken, "Perceptual evaluation of different image fusion schemes," Displays, vol. 24, no. 1, pp. 25–37, 2003.
[24] Y. HaCohen, E. Shechtman, D. B. Goldman, and D. Lischinski, "Non-rigid dense correspondence with applications for image enhancement," ACM Trans. Graph., vol. 30, no. 4, pp. 70:1–70:10, 2011.
[25] A. Levin, D. Lischinski, and Y. Weiss, "Colorization using optimization," ACM Trans. Graph., vol. 23, no. 3, pp. 689–694, 2004.
[26] D. Cohen-Or, O. Sorkine, R. Gal, T. Leyvand, and Y.-Q. Xu, "Color harmonization," ACM Trans. Graph., vol. 25, no. 3, pp. 624–630, 2006.
[27] C. Sauvaget, J.-N. Vittaut, J. Suarez, V. Boyer, and S. Manuel, "Automated colorization of segmented images based on color harmony," J. Multimedia Process. Technol., vol. 1, no. 4, pp. 228–244, 2010.
[28] E. Reinhard, M. Adhikhmin, B. Gooch, and P. Shirley, "Color transfer between images," IEEE Comput. Graph. Appl., vol. 21, no. 5, pp. 34–41, Sep./Oct. 2001.
[29] X. Xiao and L. Ma, "Color transfer in correlated color space," in Proc. ACM Int. Conf. Virtual Reality Continuum Appl., 2006, pp. 305–309.
[30] A. Abadpour and S. Kasaei, "An efficient PCA-based color transfer method," J. Vis. Commun. Image Represent., vol. 18, no. 1, pp. 15–34, 2007.
[31] C. R. Senanayake and D. C. Alexander, "Colour transfer by feature based histogram registration," in Proc. Brit. Mach. Vis. Conf., 2007, pp. 1–10.
[32] T. Pouli and E. Reinhard, "Progressive color transfer for images of arbitrary dynamic range," Comput. Graph., vol. 35, no. 1, pp. 67–80, 2011.
[33] N. Papadakis, E. Provenzi, and V. Caselles, "A variational model for histogram transfer of color images," IEEE Trans. Image Process., vol. 20, no. 6, pp. 1682–1695, Jun. 2011.
[34] S. Liu, H. Sun, and X. Zhang, "Selective color transferring via ellipsoid color mixture map," J. Vis. Commun. Image Represent., vol. 23, no. 1, pp. 173–181, 2012.
[35] X. Qian, Y. Wang, and B. Wang, "Effective contrast enhancement method for color night vision," Infr. Phys. Technol., vol. 55, no. 1, pp. 130–136, 2012.
[36] X. Qian, L. Han, Y. Wang, and B. Wang, "Color contrast enhancement for color night vision based on color mapping," Infr. Phys. Technol., vol. 57, pp. 36–41, Mar. 2013.
[37] A. Toet and M. A. Hogervorst, "Progress in color night vision," Opt. Eng., vol. 51, no. 1, p. 010901, 2012.
[38] E. A. Essock, M. J. Sinai, J. S. McCarley, W. K. Krebs, and J. K. DeFord, "Perceptual ability with real-world nighttime scenes: Image-intensified, infrared, and fused-color imagery," Hum. Factors, vol. 41, no. 3, pp. 438–452, 1999.
[39] E. Reinhard and T. Pouli, "Colour spaces for colour transfer," in Computational Color Imaging. Springer, 2011, pp. 1–15.
[40] B. Jiang et al., "Nighttime image dehazing with modified models of color transfer and guided image filter," in Multimedia Tools and Applications, 2017.
[41] Y. Zheng and E. A. Essock, "A local-coloring method for night-vision colorization utilizing image analysis and fusion," Inf. Fusion, vol. 9, no. 2, pp. 186–199, 2008.
[42] S. Yin, L. Cao, Y. Ling, and G. Jin, "One color contrast enhanced infrared and visible image fusion method," Infr. Phys. Technol., vol. 53, no. 2, pp. 146–150, 2010.
[43] Y.-J. Wang, H.-K. Qiu, L.-L. Wang, and X.-B. Shi, "Research on algorithm of night vision image fusion and coloration," J. China Univ. Posts Telecommun., vol. 20, pp. 20–24, Aug. 2013.
[44] M. Fairchild, "Refinement of the RLAB color space," Color Res. Appl., vol. 21, pp. 338–346, Jun. 1996.
[45] J. Yin and J. R. Cooperstock, "Color correction methods with applications to digital projection environments," J. WSCG, vol. 12, nos. 1–3, pp. 499–506, 2004.
[46] Y. Xiang, B. Zou, and H. Li, "Selective color transfer with multi-source images," Pattern Recognit. Lett., vol. 30, no. 7, pp. 682–689, 2009.
[47] J. Schanda, Ed., Colorimetry: Understanding the CIE System. Hoboken, NJ, USA: Wiley, 2007.
[48] M. Stokes, M. Anderson, S. Chandrasekar, and R. Motta, "A standard default color space for the Internet—sRGB," Hewlett-Packard, Microsoft, Palo Alto, CA, USA, Tech. Rep., 1996.
[49] Y. Rubner, C. Tomasi, and L. J. Guibas, "The earth mover's distance as a metric for image retrieval," Int. J. Comput. Vis., vol. 40, no. 2, pp. 99–121, Nov. 2000.
[50] A. Bhattachayya, "On a measure of divergence between two statistical populations defined by their probability distribution," Bull. Calcutta Math. Soc., vol. 35, pp. 99–109, 1943.
[51] D. Martin, C. Fowlkes, D. Tal, and J. Malik, "A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics," in Proc. 8th IEEE Int. Conf. Comput. Vis. (ICCV), vol. 2, Jul. 2001, pp. 416–423.
[52] W. B. Thompson, P. Shirley, and J. A. Ferwerda, "A spatial post-processing algorithm for images of night scenes," J. Graph. Tools, vol. 7, no. 1, pp. 1–12, 2002.
[53] J. Cepeda-Negrete. Personal Website. Accessed: Sep. 21, 2017. [Online]. Available: https://jonathancn16.wixsite.com/profile
[54] J. Cepeda-Negrete. Academic Website. Accessed: Sep. 21, 2017. [Online]. Available: https://www.diciva.ugto.mx/profesores/jonathancepeda.html
[55] K. Tsirikolias, "Low level image processing and analysis using radius filters," Digit. Signal Process., vol. 50, pp. 72–83, Mar. 2016.
[56] D. L. Ruderman, T. W. Cronin, and C.-C. Chiao, "Statistics of cone responses to natural images: Implications for visual coding," J. Opt. Soc. Amer. A, Opt. Image Sci., vol. 15, no. 8, pp. 2036–2045, 1998.

JONATHAN CEPEDA-NEGRETE, photograph and biography not available at the time of publication.

RAUL E. SANCHEZ-YANEZ (M'–), photograph and biography not available at the time of publication.

FERNANDO E. CORREA-TOME, photograph and biography not available at the time of publication.

ROCIO A. LIZARRAGA-MORALES (M'–), photograph and biography not available at the time of publication.
