

Color reproduction pipeline for an RGBW filter array sensor

WONSEOK CHOI,1,* HYUN SANG PARK,2 AND CHONG-MIN KYUNG1

1The School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
2The Division of Electrical Engineering, Kongju National University, Cheonan, South Korea
*[email protected]

Abstract: Many types of RGBW color filter array (CFA) have been proposed for various purposes. Most studies utilize the white pixel intensity to improve the signal-to-noise ratio of the image and to demosaic the image, but we note that the white pixel intensity can also be utilized to improve color reproduction. In this paper, we propose a color reproduction pipeline for RGBW CFA sensors based on a fast, accurate, and hardware-friendly gray pixel detection that uses the white pixel intensity. The proposed color reproduction pipeline was tested on a dataset captured with an offset pixel aperture (OPA) sensor that has an RGBW CFA. Experimental results show that the proposed pipeline estimates the illumination more accurately and preserves achromatic colors better than conventional methods that do not use the white pixel intensity.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Many types of color filter arrays (CFA) have recently been proposed [1–5] to improve image quality. As one of these efforts, the white pixel, a transparent filter element, has been included in CFAs [6–8], as shown in Fig. 1. The wide spectral characteristics of the white pixel are utilized not only to enhance the brightness and reflectance in displays [9,10], but also to increase the signal-to-noise ratio of the image in cameras [11]. In addition, white pixels with integrated micro-apertures are used to obtain a color image and depth-related disparity simultaneously in a single-lens imaging system [12–14]. We note that the white pixel intensity can also be used to improve the color reproduction of the image.

The conventional color reproduction pipeline consists of two steps: white balance and color correction. White balance is motivated by the color constancy ability of the human visual system, which perceives object colors as approximately constant regardless of the color of the light source [15]. The goal of computational color constancy is to eliminate the color cast caused by the incident light. In order to render the image as if it were captured under a white light source regardless of the incident illumination, most color constancy methods first estimate the scene illumination from the color-biased image captured by the image sensor. They then correct the image by applying a transform matrix calculated from the color of the estimated illuminant. Approaches to illumination estimation are mainly divided into statistical methods [16–18] and learning-based methods [19–21]. Although learning-based approaches show remarkable performance, their dependency on training data [22], expensive hardware requirements, and slow running speed make them difficult to use in practical applications. Based on the hypothesis that most natural images contain detectable gray pixels, gray pixel detection approaches [23,24] estimate the illumination quickly and as accurately as learning-based approaches. However, since these methods use a local window to detect gray pixels, it is difficult to extract reliable gray pixels in uniform regions and at the boundaries of different color surfaces. In addition, the gray index sorting algorithm used in previous works to select the reliable top n% gray pixels for every frame increases the hardware cost.

#391253 https://doi.org/10.1364/OE.391253
Received 27 Feb 2020; revised 28 Apr 2020; accepted 28 Apr 2020; published 8 May 2020

Fig. 1. Various color filter array (CFA) patterns. (a) Bayer CFA [5], which is widely used in digital cameras. (b) Sony RGBW CFA [6]. (c) Bayer-like RGBW CFA [8]. (d) Offset pixel aperture (OPA) RGBW CFA [14], whose white pixel is covered with a micro-aperture.

The objective of color correction is to transform colors from the device-dependent color space into a device-independent color space. Since the spectral sensitivity of the image sensor usually differs from that of the desired color space, color correction is an essential step in color reproduction. Among the many proposed algorithms [25–29], least square regression approaches [30–32] are widely used due to their low computational complexity. White color preservation is one of the most important features of color reproduction. However, the color correction matrix obtained by least square regression causes the white color to shift because it minimizes the colorimetric error over all calibration colors. A constrained least square regression [33,34] was proposed with a constraint that maps a selected white color exactly. Although the constrained least square regression performs well for a hypothetical set that contains all surface reflectances, the colorimetric error increases when the input image does not contain sufficient white reflectance. In this paper, we propose a color reproduction pipeline for RGBW sensors, where the limitations of white balance and color correction are resolved by using the white pixel intensity. The contributions of the proposed pipeline are as follows.

• We propose a fast, accurate, and hardware-friendly gray pixel detection algorithm for RGBW sensors. Since the proposed algorithm does not use a local window, reliable gray pixels can be detected even in uniform regions. Instead of a computationally heavy sorting algorithm, the properties of the Skellam distribution are used to efficiently distinguish reliable gray pixels. To the best of our knowledge, the proposed algorithm is the first attempt to use the white pixel intensity for illuminant estimation.

• We propose an adaptive white preserved color correction that tunes the color correction matrix according to the gray index of each pixel calculated from the gray pixel detection algorithm. The tuning of the color correction matrix is performed by a weighted summation of color correction matrices with the gray index as a weight factor. Such color correction matrix tuning preserves the achromatic color while keeping the chromatic color from deteriorating.

The rest of the paper is structured as follows. The proposed gray pixel detection algorithm is explained in Section 2, the proposed color reproduction pipeline is described in Section 3, experimental results are discussed in Section 4, and limitations and future work are discussed in Section 5. The paper is concluded in Section 6.

2. Gray pixel detection using the white pixel intensity

The achromatic region provides an important cue for estimating the illumination because it reflects the color of the incident light. If the achromatic region can be extracted from the color-biased image, the scene illumination can be estimated accurately.

Yang et al. verified the hypothesis that most natural images in the real world contain detectable gray pixels, which can be used for illuminant estimation [23,24]. Based on this hypothesis, previous works defined an illuminant-invariant measure (IIM) using a local contrast or local gradient. These methods may fail to detect reliable gray pixels that are isolated or located within a uniform region because the IIM is calculated within a local window. To solve this problem, we define the IIM using the white pixel intensity of the RGBW sensor. The mathematical derivation is as follows. The pixel intensity $I_i(x, y) \in \{I_R, I_G, I_B, I_W\}$ at $(x, y)$ can be represented by the dichromatic reflection model [35,36] as

$$I_i(x, y) = m_b(x, y)\int E(\lambda)S_i(\lambda)R_b(x, y, \lambda)\,d\lambda + m_s(x, y)\int E(\lambda)S_i(\lambda)R_s(x, y, \lambda)\,d\lambda, \tag{1}$$

where $E(\lambda)$ is the illuminant spectral power distribution, $S_i(\lambda)$ is the spectral sensitivity of the sensor, $R_b(x, y, \lambda)$ is the body reflectance, $R_s(x, y, \lambda)$ is the specular reflectance, $m_b(x, y)$ is the body geometric scale factor, $m_s(x, y)$ is the specular geometric scale factor, and $\lambda$ is the wavelength. The goal of white balance is to estimate $E(\lambda)$ from $I_i(x, y)$ and render the image as if it were captured under a white light source. This is an ill-posed problem, however, since the only observation available from the image sensor is the pixel intensity, which changes not only with the spectral distribution of the illuminant but also with the reflectance of the object. Thus, additional assumptions are needed to solve the problem. Although the pixel intensity depends on specular reflection, numerous illuminant estimation algorithms ignore it for simplicity based on the Lambertian reflectance model [37–39]:

$$I_i(x, y) = m(x, y)\int E(\lambda)S_i(\lambda)R(x, y, \lambda)\,d\lambda, \tag{2}$$

where $m(x, y)$ is the Lambertian geometric scale factor and $R(x, y, \lambda)$ is the surface reflectance. In practice, it is impossible to estimate the continuous illuminant function from the pixel intensity, which is an integral over the wavelength [18,40,41]. The von Kries coefficient law [42] is adopted in numerous works [43–46] to transform (2) into a simplified diagonal model as

$$I_i(x, y) = E_i(x, y)R_i(x, y), \tag{3}$$

where $E_i(x, y)$ is the diagonal matrix of the illumination and $R_i(x, y)$ is the reflectance. In logarithmic space, the pixel intensity is expressed as the summation of the logarithms of $E_i(x, y)$ and $R_i(x, y)$:

$$\log I_i(x, y) = \log(E_i(x, y) \cdot R_i(x, y)) = \log E_i(x, y) + \log R_i(x, y). \tag{4}$$

Suppose the illuminant is uniform over the R, G, B, and W pixels at the same position $(x, y)$. Once we estimate the white pixel intensity from the R, G, and B pixels as $I'_W(x, y)$, the difference between $\log I_W(x, y)$ and $\log I'_W(x, y)$ is independent of the illuminant $E_i(x, y)$ in logarithmic space, which can be expressed as

$$\Delta \log I_W(x, y) = |\log I_W(x, y) - \log I'_W(x, y)| = |\log R_W(x, y) - \log R'_W(x, y)|. \tag{5}$$

Since $\Delta \log I_W(x, y)$ is independent of the illuminant, it can be used as an IIM, as shown in Fig. 2(e). Based on the spectral correlation among the R, G, B, and W pixel intensities, the estimated white pixel intensity $I'_W(x, y)$ is calculated under the assumption [47] that there is a linear relationship between the R, G, B, and W pixel intensities. An offset term is also included to compensate for the spectral mismatch of the color filter array [48]:

$$I'_W(x, y) = \hat{k}_R I_R(x, y) + \hat{k}_G I_G(x, y) + \hat{k}_B I_B(x, y) + \hat{k}_O, \tag{6}$$

where $\hat{k}_R$, $\hat{k}_G$, $\hat{k}_B$, and $\hat{k}_O$ are white estimation coefficients obtained by least square regression.
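As an illustration, the coefficients of Eq. (6) can be fitted with an ordinary least squares solver. The following is a minimal sketch under our own assumptions: the function names and the use of numpy.linalg.lstsq are illustrative choices, as the paper only states that least square regression is used.

```python
# Sketch of Eq. (6): fit the white estimation coefficients k_R, k_G, k_B, k_O
# by least squares over co-located training pixels (1-D flattened arrays).
import numpy as np

def fit_white_coefficients(I_R, I_G, I_B, I_W):
    """Fit I_W ~ k_R*I_R + k_G*I_G + k_B*I_B + k_O (Eq. 6)."""
    # Design matrix with a constant column for the offset term k_O.
    A = np.stack([I_R, I_G, I_B, np.ones_like(I_R)], axis=1)
    k, *_ = np.linalg.lstsq(A, I_W, rcond=None)
    return k  # array [k_R, k_G, k_B, k_O]

def estimate_white(I_R, I_G, I_B, k):
    """Estimated white pixel intensity I'_W from Eq. (6)."""
    return k[0] * I_R + k[1] * I_G + k[2] * I_B + k[3]
```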

Fig. 2. The heat maps of the gray index extracted from various algorithms. (a) Input image of the Macbeth ColorChecker. (b) Ground truth of the gray index. (c) Estimated gray index of color constancy using gray pixels [23]. (d) Estimated gray index of improved color constancy using gray pixels [24]. (e) Estimated gray index using $\Delta \log I_W(x, y)$. (f) Estimated gray index of the proposed method.

In order to distinguish the reliable IIM at each pixel, the standard deviation of the IIM is considered. The pixel intensity is determined by the number of arriving photons, which follows the laws of quantum physics [49]. As the probability distribution of photon arrivals at a pixel follows the Poisson distribution, the probability distribution of $\Delta \log I_W(x, y)$ is that of the difference of two random variables that are logarithms of Poisson variables. Since it is too complicated to calculate the standard deviation of $\Delta \log I_W(x, y)$, the difference of $I_W(x, y)$ and $I'_W(x, y)$ is designated as the IIM under the assumption that $\Delta I_W(x, y)$ also reasonably eliminates the effect of the illuminant. The IIM for estimating gray pixels is calculated as

$$\Delta I_W(x, y) = I_W(x, y) - I'_W(x, y). \tag{7}$$

Since the IIM is calculated at each pixel, it can be used in uniform regions, boundary regions, single-illuminant environments, and multi-illuminant environments. The probability distribution of the IIM follows the Skellam distribution [50,51], because the difference between two Poisson random variables is Skellam distributed, which can be expressed as

$$f(k; \mu_1, \mu_2) = e^{-(\mu_1 + \mu_2)}\left(\frac{\mu_1}{\mu_2}\right)^{k/2} I_k\!\left(2\sqrt{\mu_1\mu_2}\right), \tag{8}$$

where $\mu_1$ and $\mu_2$ are the means of the two Poisson distributions, i.e., of the white pixel intensity and the estimated white pixel intensity, and $I_k(z)$ is the modified Bessel function of the first kind. The mean $\mu_s$ and the standard deviation $\sigma_s$ of the Skellam distribution are given by

$$\mu_s = \mu_1 - \mu_2, \tag{9}$$

$$\sigma_s = \sqrt{\mu_1 + \mu_2}. \tag{10}$$

Using this property of the Skellam distribution, a gray index $GI(x, y)$ is defined to measure the grayness of each pixel as

$$GI(x, y) = \begin{cases} 1 - \dfrac{|\Delta I_W(x, y)|}{c\sigma_s} & \text{if } |\Delta I_W(x, y)| < c\sigma_s \\ 0 & \text{otherwise,} \end{cases} \tag{11}$$

where $c$ is a threshold coefficient. The threshold of $GI$ thus adapts to the level of the pixel intensity. The larger the $GI$, the higher the probability that the pixel is gray. To reduce noise and obtain a reliable $GI$, an average filter is applied to the $GI$ as

$$GI^*(x, y) = AF_5\{GI(x, y)\}, \tag{12}$$

where $AF_5\{\cdot\}$ is the $5 \times 5$ averaging filter. As shown in Fig. 2(f), the proposed $GI^*$ can robustly distinguish achromatic color patches and border lines of the Macbeth ColorChecker as gray pixels. In addition, the proposed $GI^*$ is easily implemented in hardware with a 48 k NAND2 gate count and 9.5 kB of SRAM, as it does not require the sorting algorithm used in previous works [23,24] to select the reliable top n% gray pixels for every frame.
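To make the detection step concrete, the following sketch computes the IIM of Eq. (7), the Skellam-based threshold of Eqs. (10) and (11), and the averaged gray index of Eq. (12). It assumes co-sited, demosaiced channel planes in photon-count-like units so that the Poisson/Skellam model applies; the epsilon guard and the use of scipy.ndimage.uniform_filter are our own implementation choices.

```python
# Sketch of Eqs. (7)-(12): per-pixel gray index from the white-pixel IIM.
# I_W and I_W_est are same-shaped 2-D arrays (measured and estimated white).
import numpy as np
from scipy.ndimage import uniform_filter

def gray_index(I_W, I_W_est, c=0.001):
    delta = I_W - I_W_est                        # IIM, Eq. (7)
    sigma_s = np.sqrt(I_W + I_W_est)             # Skellam std, Eq. (10)
    thresh = np.maximum(c * sigma_s, 1e-12)      # guard against division by zero
    gi = np.where(np.abs(delta) < thresh,
                  1.0 - np.abs(delta) / thresh,  # Eq. (11)
                  0.0)
    return uniform_filter(gi, size=5)            # GI*, 5x5 average, Eq. (12)
```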

3. Color reproduction with gray pixel

3.1. White balance using the gray pixel detection

The color of the illuminant can be estimated from $GI^*$, which carries the illuminant characteristics. According to the definition of $GI^*$, its magnitude is close to one for achromatic colors. The estimated illuminant color $e_j$ is calculated as

$$e_j = \frac{1}{N}\sum_{x,y} I_j(x, y)\,GI^*(x, y), \quad j \in \{R, G, B\}, \tag{13}$$

where $N$ is the number of pixels with non-zero $GI^*$. The white balance is then applied to each pixel value of the input image by

$$I_j^{WB}(x, y) = I_j(x, y)\,\frac{e_R + e_G + e_B}{3e_j}, \quad j \in \{R, G, B\}, \tag{14}$$

where $I_j^{WB}$ is the white-balanced pixel value.

3.2. Adaptive color correction considering grayness

For a $3 \times 1$ input vector $c$ formed by the R, G, and B sensor responses, the linear color correction transform is typically performed by

$$t = Mc, \tag{15}$$

where $t$ is a $3 \times 1$ vector of the known standard RGB values that we want to restore, and $M$ is a $3 \times 3$ color correction matrix. Unlike the typical case, the elements of $c$ are the white-balanced pixel values rather than the raw sensor responses, because white balance precedes color correction in our pipeline. For a set of $N$ training patches, we denote the $3 \times N$ target color matrix and the $3 \times N$ white-balanced color matrix as $T$ and $C$, respectively. The color correction matrix is generally calculated by least square regression as

$$\hat{M} = \arg\min_M \|T - MC\|_F^2, \tag{16}$$

where $\|\cdot\|_F$ is the Frobenius norm. Since the least square regression finds the optimal $\hat{M}$ that minimizes the colorimetric error over all target colors, the white color error, which is important in color reproduction, may increase. The objective of the adaptive color correction considering grayness is to map the achromatic colors with low colorimetric error using gray pixel detection, while keeping the chromatic colors from deteriorating. A dedicated color correction matrix $\hat{M}_{ca}$ for a sub-dataset containing only achromatic color patches is calculated by least square regression. The adaptive white preserved color correction matrix $\hat{M}_{wp}$ is calculated as

$$\hat{M}_{wp} = w \cdot GI^*\,\hat{M}_{ca} + (1 - w \cdot GI^*)\,\hat{M}, \tag{17}$$

where $w$ is the weighting factor of the gray index. Through this weighting of the color correction matrices, white-preserved color correction is performed adaptively for each pixel. Moreover, the proposed adaptive white preserved color correction can utilize not only the linear color correction (LCC) but also other types of color correction, such as the polynomial color correction (PCC) [30], the root-polynomial color correction (RPCC) [31], and the $3 \times 4$ color correction matrix ($3 \times 4$ CCM) [52]. More details regarding these types of color correction can be found in Section 4.2.
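Putting Sections 3.1 and 3.2 together, a sketch of Eqs. (13), (14), and (17) could look as follows. The function names, the clipping of the blend weight to [0, 1], and the per-pixel matrix blend via einsum are our own assumptions about an implementation, not the paper's code.

```python
# Sketch of Eqs. (13)-(14) and (17): GI*-weighted illuminant estimation,
# von Kries-style white balance, and the per-pixel blend of the achromatic
# matrix M_ca with the global least-squares matrix M.
# img_rgb: (H, W, 3) image; gi: (H, W) gray index GI*.
import numpy as np

def estimate_illuminant(img_rgb, gi):
    """Eq. (13): GI*-weighted mean color, normalized by the non-zero GI* count."""
    N = np.count_nonzero(gi)
    return (img_rgb * gi[..., None]).sum(axis=(0, 1)) / N

def white_balance(img_rgb, e):
    """Eq. (14): scale each channel so the estimated illuminant becomes gray."""
    return img_rgb * (e.sum() / (3.0 * e))

def adaptive_color_correction(img_wb, gi, M, M_ca, w=1.0):
    """Eq. (17): per-pixel weighted sum of the two 3x3 correction matrices."""
    a = np.clip(w * gi, 0.0, 1.0)[..., None, None]   # per-pixel blend weight
    M_wp = a * M_ca + (1.0 - a) * M                  # (H, W, 3, 3)
    return np.einsum('hwij,hwj->hwi', M_wp, img_wb)  # t = M_wp c per pixel
```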

4. Experimental results

Conventional datasets such as the Gehler-Shi [53,54], SFU [55,56], and NUS [57] datasets cannot be used to evaluate the proposed color reproduction pipeline since they do not provide the white pixel intensity. Therefore, we captured a dataset using the full-color OPA sensor [14], whose color filter array is RGBW. The unit pixel size of the OPA sensor is $2.8 \times 2.8\ \mu\text{m}^2$, and the sensor was fabricated in a 0.11 µm CIS process. In the experiments, we used raw OPA sensor images with a resolution of $1544 \times 1100$ and a lens system with a 6 mm focal length and an F-number of 1.4. The X-Rite Macbeth ColorChecker Classic and the X-Rite ColorChecker White Balance were used to evaluate the performance. The color difference $\Delta E^*_{ab}$ in the CIELAB color space and the recovery angular error $E_{rec}$ are used as the evaluation metrics. The recovery angular error $E_{rec}$ is defined as

$$E_{rec} = \cos^{-1}\left(\frac{e_{gt} \cdot e_{est}}{\|e_{gt}\|\,\|e_{est}\|}\right), \tag{18}$$

where $\cdot$ represents the vector dot product, $e_{gt}$ is the normalized RGB value of the ground-truth illuminant color, and $e_{est}$ is the normalized RGB value of the estimated illuminant color.
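Eq. (18) translates directly into code; a small helper (our own naming) returning the error in degrees might look as follows.

```python
# Sketch of Eq. (18): recovery angular error between the ground-truth and
# estimated illuminant colors (3-vectors), returned in degrees.
import numpy as np

def recovery_angular_error(e_gt, e_est):
    cos_angle = np.dot(e_gt, e_est) / (np.linalg.norm(e_gt) * np.linalg.norm(e_est))
    # Clip to avoid NaN from tiny floating-point excursions beyond [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
```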

4.1. White balance using the gray pixel detection

The proposed white balance method is compared with Gray-World (GW) [16], Shades of Gray (SoG) [17], first-order Gray-Edge (GE1) and second-order Gray-Edge (GE2) [18], Weighted Gray-Edge (WGE) [45], and color constancy using grey pixels (GP15 [23], GP18 [24]). The proposed method has one variable parameter, $c$, which determines the threshold on the standard deviation of the Skellam distribution for detecting reliable gray pixels. As shown in Fig. 3(a), the angular error was calculated while varying $c$ on the dataset to determine its optimal value. The proposed method shows stable performance when $c$ is between 0.001 and 0.1; $c = 0.001$ was used in the following experiments because it gives the best performance. The parameter $n$ of GP15 and GP18, which is the percentage of detected gray pixels, was set to 10%, as the minimal angular error is obtained there, as shown in Fig. 3(b).

Fig. 3. The influence of the variable parameters on the white balance performance on our dataset. (a) Relationship between the angular error and the parameter c of the proposed method. (b) Relationship between the angular error and the parameter n% of GP15.

First, we evaluated various white balance algorithms on the achromatic color patches of the Macbeth ColorChecker under various illuminants produced by the X-Rite SpectraLight QC light booth. The results of this experiment are listed in Table 1. The smallest average angular error over all illuminants, 0.991°, is obtained with the proposed method. Figure 4 shows the visual results of each algorithm for the achromatic color patches of the Macbeth ColorChecker.

Fig. 4. Visual results of each algorithm for achromatic color patches of the Macbeth ColorChecker. (a) The results under 6500 K illuminant. (b) The results under 4000 K illuminant. (c) The results under 3500 K illuminant. (d) The results under 2856 K illuminant.

Table 1. The angular errors (in degrees) of each algorithm for the gray patches of the Macbeth ColorChecker under various illuminants.

                 6500 K          4000 K          3500 K          2856 K          Average
Method           Mean   Median   Mean   Median   Mean   Median   Mean   Median   Mean   Median
Do nothing       6.936  6.890    4.609  4.523    4.335  4.232    3.217  3.487    4.775  4.783
GW               1.372  1.359    1.466  1.356    1.463  1.336    1.159  0.991    1.365  1.260
SoG              1.059  0.993    1.287  1.168    1.280  1.144    1.123  0.957    1.187  1.066
GE1              1.508  1.466    1.628  1.510    1.560  1.422    1.235  1.051    1.483  1.362
GE2              2.254  2.221    2.242  2.134    2.070  1.938    1.478  1.556    2.011  1.962
WGE              1.364  1.293    1.294  1.140    1.247  1.072    1.182  0.972    1.272  1.119
GP15             0.958  0.848    1.356  1.249    1.420  1.294    1.708  1.516    1.361  1.227
GP18             0.946  0.817    1.450  1.345    1.420  1.295    1.769  1.574    1.396  1.258
Proposed method  0.947  0.847    0.854  0.628    0.921  0.735    1.241  1.182    0.991  0.848

Next, we evaluated the performance on the OPA sensor image dataset, which consists of 42 images of indoor scenes such as classrooms, hallways, bookshelves, stairs, and poster boards, and 62 images of various objects such as office supplies, cups, dolls, and miniatures, captured under four illuminant sources (6500 K, 4000 K, 3500 K, and 2856 K), as shown in Fig. 5. For estimating the actual illuminant color of the scene, every image contains a white board, which is masked out during illuminant estimation; the exact position of the white board was labeled manually. The results of the proposed method and the other methods are listed in Table 2. For both the mean angular error and the mean color difference, the proposed method outperforms the other methods: compared with the second-best methods, its mean angular error and mean color difference are 20.6% and 27.7% lower, respectively. Figure 6 shows the visual results of each method applied to an image of the dataset.

4.2. Adaptive color correction considering grayness

The proposed adaptive color correction considering grayness is compared with the white-point preserving least square (WPPLS) regression method [34], which shares the same goal of preserving white. Moreover, since the proposed method can be applied to the LCC, PCC [30], RPCC [31], and 3 × 4 CCM [52], we compared the results with and without the proposed method applied. WPPLS finds the optimal color correction matrix under the constraint that a particular surface reflectance is mapped without error. In this experiment, the 19th patch of the Macbeth ColorChecker is used as the constraint surface reflectance of WPPLS.

Fig. 5. Example images of the dataset captured from the OPA sensor [14], which has an RGBW CFA.

Fig. 6. Visual results of each method applied to the image of the dataset. The angular error is displayed at the bottom left of each image.

Table 2. Performance of various methods and the proposed method on the OPA sensor image dataset.

                 Angular error (°)   Color difference (ΔE*ab)
Method           Mean   Median       Mean   Median
Do nothing       4.226  4.247        13.59  12.94
GW               1.719  1.440        4.171  3.056
GE1              2.361  2.296        6.300  4.639
GE2              2.751  2.614        7.527  6.226
WGE              2.335  2.159        4.637  3.959
SoG              1.854  1.841        6.325  4.479
GP15             1.653  1.218        4.273  1.947
GP18             1.600  1.242        4.264  2.170
Proposed method  1.270  0.988        3.015  2.347

The color correction matrices of WPPLS and the proposed method are calculated based on the LCC. The angular errors of the LCC, WPPLS, and the proposed method are compared in Fig. 7. Although the angular error of WPPLS is the smallest for the 9th and 18th to 20th patches, the angular error of the proposed algorithm is the smallest for most of the other patches, including the achromatic color patches.

Fig. 7. Comparison of the mean angular error for each patch of the Macbeth ColorChecker.

The results for the achromatic color patches (19th to 24th) and the chromatic color patches (1st to 18th) of each method are summarized in Table 3. The mean and median angular errors of the LCC with the proposed method are the lowest in all cases. As white preservation methods focus on achromatic color preservation, which is important in color reproduction, they increase chromatic color errors by definition [34]. Given the importance of white, the color difference results show that the proposed method preserves the achromatic colors while preventing degradation of the chromatic colors. A visual comparison of the color correction results for the Macbeth ColorChecker is shown in Fig. 8.

Fig. 8. Visual results of color correction for each patch of the Macbeth ColorChecker. The average angular error of the proposed method is the lowest compared with the other methods.

In addition, we evaluated the performance of the LCC, PCC, RPCC, and 3 × 4 CCM with and without the proposed adaptive white preserving approach. The results for the achromatic and chromatic color patches are listed in Table 4. For the achromatic color patches, the angular error and the color difference of the proposed method are lower in all cases. In summary, the proposed method obtains the lowest errors for achromatic colors while keeping the chromatic colors from deteriorating.

Table 3. The angular error and the color difference for the achromatic and chromatic color patches of the Macbeth ColorChecker.

              Angular error (°)                  Color difference (ΔE*ab)
              Achromatic      Chromatic          Achromatic      Chromatic
Method        Mean   Median   Mean   Median      Mean   Median   Mean   Median
LCC           1.824  1.633    3.664  3.016       13.96  13.54    13.39  12.76
LCC + WPPLS   2.252  2.261    4.569  4.490       14.44  13.04    29.18  28.65
LCC + Prop.   1.409  1.262    3.496  2.639       13.60  13.29    14.05  12.88

Table 4. The results of the color correction methods with and without the proposed adaptive white preserving color correction.

                    Angular error (°)                Color difference (ΔE*ab)
                    Achromatic      Chromatic        Achromatic      Chromatic
Method              Mean   Median   Mean   Median    Mean   Median   Mean   Median
LCC                 1.824  1.633    3.664  3.016     13.96  13.54    13.39  12.76
LCC + Prop.         1.409  1.262    3.496  2.639     13.60  13.29    14.05  12.88
PCC,2               1.727  1.243    2.901  2.336     8.802  9.142    6.021  5.090
PCC,2 + Prop.       1.017  0.732    2.913  2.388     8.209  7.508    7.905  6.580
PCC,3               1.417  1.179    2.114  1.472     4.745  4.103    3.678  3.002
PCC,3 + Prop.       1.096  0.911    2.174  1.515     4.370  3.493    4.413  3.458
PCC,4               1.177  1.166    1.921  1.320     2.977  2.945    2.534  1.691
PCC,4 + Prop.       1.126  1.114    1.932  1.343     2.877  2.801    2.690  1.986
RPCC,2              2.160  1.779    3.241  2.412     14.47  15.53    10.14  10.39
RPCC,2 + Prop.      1.987  1.666    3.259  2.366     14.25  15.29    10.24  10.20
3 × 4 CCM           0.982  0.998    3.835  1.973     6.535  6.650    10.32  10.32
3 × 4 CCM + Prop.   0.797  0.815    3.954  2.290     6.041  6.216    11.02  10.10

5. Limitation and future work

The proposed white balance relies on the validated hypothesis [23] that most natural images in the real world contain detectable gray pixels. In the extreme case where the image contains no gray regions or no reliable gray pixels, the illuminant estimation accuracy degrades. To model this situation, test images captured against a solid-color background were used for evaluation. The results of the various algorithms are compared in Fig. 9. Since the white board was reflected on the floor, the right side of each image, including the white board and its reflection, was masked out during the illuminant estimation.

Fig. 9. Results of each white balance method on images captured on a red solid background. The angular error is displayed at the bottom left of each image.

Compared to the angular errors for the dataset images shown in Fig. 6, the angular error of each algorithm increases considerably, as shown in Fig. 9. Despite the challenging conditions, the angular error of the proposed method is the lowest among the competitors. The OPA sensor dataset used in the experiments was captured under a single-illuminant condition; illuminant estimation with the proposed method under multiple-illuminant conditions is a subject of future work.

6. Conclusion

In this paper, we have proposed a color reproduction pipeline for RGBW sensors based on a newly proposed gray pixel detection algorithm that uses the white pixel intensity. A fast, accurate, and hardware-friendly illuminant-invariant measure for gray pixel detection was developed using the white pixel intensity. The proposed gray pixel detection algorithm can detect reliable gray pixels even in uniform regions, and can be implemented in hardware with a 48 k NAND2 gate count and 9.5 kB of SRAM. To the best of our knowledge, the proposed white balance method is the first attempt to use the white pixel intensity for illuminant estimation. Experimental results show that the proposed pipeline achieves better illumination estimation accuracy and achromatic color preservation than the compared methods.

Funding

Center for Integrated Smart Sensors funded by the Ministry of Science and ICT as Global Frontier Project (CISS-2013M3A6A6073718); Samsung (G01180228).

Disclosures

The authors declare no conflicts of interest.

References

1. R. H. Kröger, "Anti-aliasing in image recording and display hardware: lessons from nature," J. Opt. A: Pure Appl. Opt. 6(8), 743–748 (2004).
2. R. Lukac and K. N. Plataniotis, "Color filter arrays: Design and performance analysis," IEEE Trans. Consumer Electronics 51(4), 1260–1267 (2005).
3. Y. Monno, S. Kikuchi, M. Tanaka, and M. Okutomi, "A practical one-shot multispectral imaging system using a single image sensor," IEEE Trans. Image Process. 24(10), 3048–3059 (2015).
4. J. Couillaud, A. Horé, and D. Ziou, "Nature-inspired color-filter array for enhancing the quality of images," J. Opt. Soc. Am. A 29(8), 1580–1587 (2012).
5. B. E. Bayer, "Color imaging array," U.S. patent 3,971,065 (1976).
6. I. Hirota, "Solid-state imaging device, method for processing signal of solid-state imaging device, and imaging apparatus," U.S. patent 8,436,925 (2013).
7. J. T. Compton and J. F. Hamilton Jr, "Image sensor with improved light sensitivity," U.S. patent 8,139,130 (2005).
8. H. Honda, Y. Iida, G. Itoh, Y. Egawa, and H. Seki, "A novel Bayer-like WRGB color filter array for CMOS image sensors," in Human Vision and Electronic Imaging XII (2007), p. 64921J.
9. Y. Kwak, J. Park, and D.-S. Park, "Generating vivid colors on red–green–blue–white electronic-paper display," Appl. Opt. 47(25), 4491–4500 (2008).
10. Y. Xiong, L. Wang, W. Xu, J. Zou, H. Wu, Y. Xu, J. Peng, J. Wang, Y. Cao, and G. Yu, "Performance analysis of PLED based flat panel display with RGBW sub-pixel layout," Org. Electron. 10(5), 857–862 (2009).
11. S. Jee, K. Song, and M. Kang, "Sensitivity and resolution improvement in RGBW color filter array sensor," Sensors 18(5), 1647 (2018).
12. B.-S. Choi, S.-H. Kim, J. Lee, D. Seong, J.-K. Shin, S. Chang, J. Park, and S.-J. Lee, "CMOS image sensor for extracting depth information using pixel aperture technique," in 2018 IEEE International Instrumentation and Measurement Technology Conference (2018), pp. 1–5.
13. J. Lee, B.-S. Choi, S.-H. Kim, J. Lee, J. Lee, S. Chang, J. Park, S.-J. Lee, and J.-K. Shin, "Effects of offset pixel aperture width on the performances of CMOS image sensors for depth extraction," Sensors 19(8), 1823 (2019).
14. B.-S. Choi, J. Lee, S.-H. Kim, S. Chang, J. Park, S.-J. Lee, and J.-K. Shin, "Analysis of disparity information for depth extraction using CMOS image sensor with offset pixel aperture technique," Sensors 19(3), 472 (2019).

15. L. T. Maloney and B. A. Wandell, "Color constancy: a method for recovering surface spectral reflectance," J. Opt. Soc. Am. A 3(1), 29–33 (1986).
16. G. Buchsbaum, "A spatial processor model for object colour perception," J. Franklin Inst. 310(1), 1–26 (1980).
17. G. D. Finlayson and E. Trezzi, "Shades of gray and colour constancy," in Color and Imaging Conference (2004), pp. 37–41.
18. J. Van De Weijer, T. Gevers, and A. Gijsenij, "Edge-based color constancy," IEEE Trans. Image Process. 16(9), 2207–2214 (2007).
19. H. R. V. Joze and M. S. Drew, "Exemplar-based color constancy and multiple illumination," IEEE Trans. Pattern Anal. Mach. Intell. 36(5), 860–873 (2014).
20. A. Gijsenij and T. Gevers, "Color constancy using natural image statistics and scene semantics," IEEE Trans. Pattern Anal. Mach. Intell. 33(4), 687–698 (2011).
21. A. Gijsenij, T. Gevers, and J. Van De Weijer, "Generalized gamut mapping using image derivative structures for color constancy," Int. J. Comput. Vis. 86(2-3), 127–139 (2010).
22. S.-B. Gao, M. Zhang, C.-Y. Li, and Y.-J. Li, "Improving color constancy by discounting the variation of camera spectral sensitivity," J. Opt. Soc. Am. A 34(8), 1448–1462 (2017).
23. K.-F. Yang, S.-B. Gao, and Y.-J. Li, "Efficient illuminant estimation for color constancy using grey pixels," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 2254–2263.
24. X. Yang, X. Jin, and J. Zhang, "Improved single-illumination estimation accuracy via redefining the illuminant-invariant descriptor and the grey pixels," Opt. Express 26(22), 29055–29067 (2018).
25. P.-C. Hung, "Colorimetric calibration in electronic imaging devices using a look-up-table model and interpolations," J. Electron. Imaging 2(1), 53–62 (1993).
26. J. S. McElvain and W. Gish, "Camera color correction using two-dimensional transforms," in Color and Imaging Conference (2013), pp. 250–256.
27. P.-C. Hung, "Color rendition using three-dimensional interpolation," Imaging Applications in the Work World 0900, 111–115 (1988).
28. H. R. Kang and P. G. Anderson, "Neural network applications to the color scanner and printer calibrations," J. Electron. Imaging 1(2), 125–136 (1992).
29. V. Cheung, S. Westland, D. Connah, and C. Ripamonti, "A comparative study of the characterisation of colour cameras by means of neural networks and polynomial transforms," Color. Technol. 120(1), 19–25 (2004).
30. G. Hong, M. R. Luo, and P. A. Rhodes, "A study of digital camera colorimetric characterization based on polynomial modeling," Color Res. Appl. 26(1), 76–84 (2001).
31. G. D. Finlayson, M. Mackiewicz, and A. Hurlbert, "Color correction using root-polynomial regression," IEEE Trans. Image Process. 24(5), 1460–1470 (2015).
32. S. Lim and A. Silverstein, "Spatially varying color correction (SVCC) matrices for reduced noise," in Color and Imaging Conference (2004), pp. 76–81.
33. G. D. Finlayson and M. S. Drew, "White-point preserving color correction," in Color and Imaging Conference (1997), pp. 258–261.
34. G. D. Finlayson and M. S. Drew, "Constrained least-squares regression in color spaces," J. Electron. Imaging 6(4), 484–494 (1997).
35. S. A. Shafer, "Using color to separate reflection components," Color Res. Appl. 10(4), 210–218 (1985).
36. G. J. Klinker, S. A. Shafer, and T. Kanade, "A physical approach to color image understanding," Int. J. Comput. Vis. 4(1), 7–38 (1990).
37. M. Oren and S. K. Nayar, "Generalization of Lambert's reflectance model," in Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques (1994), pp. 239–246.
38. W. Shi, C. C. Loy, and X. Tang, "Deep specialized network for illuminant estimation," in European Conference on Computer Vision (2016), pp. 371–387.
39. G. D. Finlayson, S. D. Hordley, and P. M. Hubel, "Color by correlation: A simple, unifying framework for color constancy," IEEE Trans. Pattern Anal. Mach. Intell. 23(11), 1209–1221 (2001).
40. A. Gijsenij, T. Gevers, and J. Van De Weijer, "Computational color constancy: Survey and experiments," IEEE Trans. Image Process. 20(9), 2475–2489 (2011).
41. G. D. Finlayson and R. Zakizadeh, "Reproduction angular error: An improved performance metric for illuminant estimation," in Proceedings of the British Machine Vision Conference (2014), pp. 1–11.
42. H. Y. Chong, S. J. Gortler, and T. Zickler, "The von Kries hypothesis and a basis for color constancy," in 2007 IEEE 11th International Conference on Computer Vision (2007), pp. 1–8.
43. G. D. Finlayson, M. S. Drew, and B. V. Funt, "Color constancy: generalized diagonal transforms suffice," J. Opt. Soc. Am. A 11(11), 3011–3019 (1994).
44. S. D. Hordley, "Scene illuminant estimation: past, present, and future," Color Res. Appl. 31(4), 303–314 (2006).
45. A. Gijsenij, T. Gevers, and J. Van De Weijer, "Improving color constancy by photometric edge weighting," IEEE Trans. Pattern Anal. Mach. Intell. 34(5), 918–929 (2012).
46. S. Kawada, R. Kuroda, and S. Sugawa, "Color reproductivity improvement with additional virtual color filters for WRGB image sensor," in Color Imaging XVIII (2013), pp. 1–7.
47. C. Park, K. Song, and M. Kang, "G-channel restoration for RWB CFA with double-exposed W channel," Sensors 17(2), 293 (2017).

48. P.-H. Su, P.-C. Chen, and H. H. Chen, "Compensation of spectral mismatch to enhance WRGB demosaicking," in IEEE International Conference on Image Processing (2015), pp. 68–72.
49. L. J. Van Vliet, I. T. Young, and J. J. Gerbrands, Fundamentals of Image Processing (Delft University of Technology, 1998).
50. J. G. Skellam, "The frequency distribution of the difference between two Poisson variates belonging to different populations," J. Roy. Statist. Soc. 109(3), 296 (1946).
51. Y. Hwang, J.-S. Kim, and I.-S. Kweon, "Sensor noise modeling using the Skellam distribution: Application to the color edge detection," in 2007 IEEE Conference on Computer Vision and Pattern Recognition (2007), pp. 1–8.
52. J. Vaillant, A. Clouet, and D. Alleysson, "Color correction matrix for sparse RGB-W image sensor without IR cutoff filter," in Unconventional Optical Imaging (2018), p. 1067704.
53. P. V. Gehler, C. Rother, A. Blake, T. Minka, and T. Sharp, "Bayesian color constancy revisited," in 2008 IEEE Conference on Computer Vision and Pattern Recognition (2008), pp. 1–8.
54. L. Shi and B. Funt, "Re-processed version of the Gehler color constancy dataset of 568 images," accessed from http://www.cs.sfu.ca/~colour/data/ (2010).
55. K. Barnard, L. Martin, B. Funt, and A. Coath, "A data set for color research," Color Res. Appl. 27(3), 147–151 (2002).
56. F. Ciurea and B. Funt, "A large image database for color constancy research," in Color and Imaging Conference (2003), pp. 160–164.
57. D. Cheng, D. K. Prasad, and M. S. Brown, "Illuminant estimation for color constancy: why spatial-domain methods work and the role of the color distribution," J. Opt. Soc. Am. A 31(5), 1049–1058 (2014).