
Article An Acquisition Method for Visible and Near Infrared Images from Single CMYG Filter Array-Based Sensor

Younghyeon Park and Byeungwoo Jeon * Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, Korea; [email protected] * Correspondence: [email protected]

Received: 9 September 2020; Accepted: 26 September 2020; Published: 29 September 2020

Abstract: Near-infrared (NIR) images are very useful in many image processing applications, including banknote recognition, vein detection, and surveillance, to name a few. To acquire the NIR image together with visible range signals, an imaging device should be able to simultaneously capture NIR and visible range images. An implementation of such a system having separate sensors for NIR and visible has practical shortcomings due to its size and hardware cost. To overcome this, a single sensor-based acquisition method is investigated in this paper. The proposed imaging system is equipped with a conventional color filter array of cyan, magenta, yellow, and green (CMYG), and achieves signal separation by applying a proposed separation matrix which is derived by mathematical modeling of the signal acquisition structure. The elements of the separation matrix are calculated through color space conversion and experimental data. Subsequently, an additional denoising process is implemented to enhance the quality of the separated images. Experimental results show that the proposed method successfully separates the acquired mixed image of visible and near-infrared signals into individual red, green, and blue (RGB) and NIR images. The separation performance of the proposed method is compared to that of related work in terms of the average peak-signal-to-noise-ratio (PSNR) and color distance. The proposed method attains average PSNR values of 37.04 and 33.29 dB for the separated RGB and NIR images, respectively, which are 6.72 and 2.55 dB higher than those of the work used for comparison.

Keywords: image processing; infrared imaging; near infrared; photography

1. Introduction

The widespread use of cameras in daily life has spurred numerous applications. These applications can become even more powerful if they can leverage more useful imaging information at lower cost. Active investigations have been underway to extract unconventional information from the images taken by inexpensive cameras for usage beyond the simple viewing of a photograph. For example, capturing images at non-visible wavelengths with inexpensive cameras could prove incredibly useful. One such effort of immediate interest is the acquisition of signals in the near-infrared (NIR) range of 780–1400 nm [1]. Conventional consumer-level cameras only acquire images in the visible region, typically by using a color filter array (CFA) placed in front of the image sensor. To avoid saturation from the accompanying infrared (IR) signal, an IR cut filter is also placed in front of the image sensor [2]. This configuration acquires an image that covers the wavelength range 380–780 nm [1] that the human eye normally sees. However, signals in the infrared range can provide valuable additional information, as indicated by numerous investigations that have been conducted on how to best extract more useful visual information from the infrared signal [3].


NIR has different spectral characteristics than visible (VIS) wavelengths [2]. Many industrial applications have applied NIR imaging to enhance the quality of images in the visible range. In remote sensing, NIR images are quite effective in identifying forests since vegetation has high reflectivity in the NIR [4]. High NIR penetration into water vapor also explains why NIR images give clear images even in hazy weather conditions, and this characteristic has been successfully exploited to dehaze outdoor scenes [5]. In manufacturing applications, NIR can detect defective products or bruises on fruit [6–8]. In medical imaging, NIR helps to detect veins, which is quite useful for intravenous injections in the arm [9,10]. In surveillance imaging, NIR light sources have been used for night vision imaging without disturbing the subjects being imaged. NIR images can also be utilized to reduce noise in visible range images captured in low light conditions [11,12]. Even this small sampling of NIR applications clearly demonstrates that NIR imaging is extremely important. These applications can be further enhanced by the combination of VIS and NIR images.

In capturing both VIS and NIR images of a scene, simultaneous capture of both of them by a single system can make matters much simpler by avoiding the need for registration. The conventional 2-CCD RGB-IR camera [13] has been used to simultaneously capture both visible red, green, and blue (RGB) and IR images. Its drawbacks include limited portability and high cost, as the system consists of two sensors and a beam splitter. The two sensors are placed 90 degrees apart to capture the scene at the same position. This dual sensor-based system incurs double the hardware production cost and is bigger than a single sensor-based system.

To overcome these limitations, research has been devoted to obtaining both visible and IR images with a single sensor. For instance, the transverse field detector (TFD)-based VIS and NIR image acquisition method operates under the principle of different spectral transmittance in silicon [14]. However, the TFD-based imaging system is still in its experimental phase. Another notable approach is the novel design of a CFA. For instance, Koyama et al. [15] proposed a photonic crystal color filter (PCCF)-based approach which adds a defect layer in a TiO2 and SiO2 stacked photonic crystal. The thickness of the defect layer controls the band-gap length between the visible and infrared wavelengths. The detector can be designed with an array of four different defect layer thicknesses to obtain R+IR, G+IR, B+IR, and IR bands. Lu et al. [16] proposed an optimal CFA design which uses simulation to obtain an optimal CFA pattern with various numbers of color filters. Because of the high production cost of PCCFs and Lu's CFA, it is difficult to utilize them in low-cost consumer devices. Although a more pragmatic camera module has been developed using an RGB-IR CFA [17–20], it replaces one of the green band pass filters in the CFA pattern with an IR long pass filter, and its production cost is higher than that of visible range color filters. Sadeghipoor et al. took a different approach that uses a single RGB camera: employing a compressive sensing framework, they developed a signal separation method [21] to extract the VIS and NIR images from a mixed image signal. The hardware system can be easily implemented by just removing the IR cut filter from a conventional camera.
However, this separation method has problems with image quality and heavy computational complexity due to the ill-posed mathematical structure and the consequential iterative reconstruction process for compressively sensed data. Our prior work [22] took a similar approach but with the complementary CFA of cyan, magenta, yellow, and green (CMYG) instead of the RGB primary colors. CMYG CFAs have the benefit of better sensitivity than RGB CFAs because of the wider spectral range of each color filter [23,24]. However, the mathematical model in the prior work considered only the noise-free case, and nontrivial noise was observed in the experimentally separated images [22]. In this paper, we improve on our prior work by re-formulating the starting mathematical model. In the proposed mathematical model, the color space conversion is more rigorously performed by converting from the CMYG color space to the device independent color space of the standard CIE1931 XYZ [25]. Moreover, by adding an additive noise term to the proposed mathematical model, the noise characteristic is analyzed, and subsequently, color-guided filtering [26] is applied to minimize the noise after the proposed separation process.

Figure 1 depicts the structure of the proposed method. The proposed imaging system is made by removing the IR cut filter from a conventional camera. The image is taken under several lighting conditions which contain visible and NIR wavelengths. The proposed imaging system captures a monochrome mosaic pattern image, and each pixel intensity corresponds to a color filter of the CMYG CFA pattern. To generate a color image from the monochrome image, a demosaicing process, such as the bilinear interpolation method [27], is applied. After the demosaicing process, a four-color channel mixed input image is generated, which contains mixed signals from the visible and NIR spectrums. In the separation process, the mixed input image is separated into VIS and NIR images. In the denoising process, the noise in the separated XYZ and NIR images is minimized by the color-guided filtering method [26] which utilizes the four-channel input image as the guide image. The separated visible image in XYZ color space [25] can be changed to the color space of choice, such as the standard RGB (sRGB) [28]. The proposed separation, denoising, and color conversion processes generate both visible and NIR images using a conventional single imaging sensor. We note that the proposed method allows simultaneous acquisition of both images and does not require an additional registration process to match different temporal and spatial viewpoints.

Figure 1. The proposed visible and near-infrared imaging system structure.
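To make the front end of Figure 1 concrete, the sketch below illustrates bilinear demosaicing of a 2×2 CMYG mosaic using normalized convolution. It is only a rough illustration under an assumed tile layout (C and M in the first row, Y and G in the second), not the exact demosaicing routine of the camera used later in the experiments.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_cmyg(mosaic):
    """mosaic: HxW raw sensor image -> HxWx4 demosaiced image (channels C, M, Y, G)."""
    offsets = {"C": (0, 0), "M": (0, 1), "Y": (1, 0), "G": (1, 1)}  # assumed tile layout
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.50, 1.0, 0.50],
                       [0.25, 0.5, 0.25]])             # bilinear interpolation kernel
    channels = []
    for name in "CMYG":
        dy, dx = offsets[name]
        mask = np.zeros_like(mosaic, dtype=float)
        mask[dy::2, dx::2] = 1.0                        # pixels sampled by this color filter
        sparse = mosaic.astype(float) * mask
        # normalized convolution: fill the missing samples from their neighbours
        channels.append(convolve(sparse, kernel) / convolve(mask, kernel))
    return np.stack(channels, axis=-1)
```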

The remainder of this paper is organized as follows. Section 2 addresses the mathematical model and the proposed separation method. Experiment results are given in Section 3, and the conclusion is provided in Section 4.

2. Proposed Method

2.1. Mathematical Model

The image capturing mechanism used in this paper is exactly the same as in conventional camera systems. The light rays reflected from a point on the object's surface go through the lens. After the light rays pass through the lens, an IR cut filter reflects the NIR wavelengths away while only passing visible wavelengths of light in. The installed CFA in front of the image sensor separates the incoming wavelengths into the appropriate color channels. The number of channels and colors used in the CFA depends on the system characteristics. The light rays passing through the optical lens and filters are integrated by the image sensor to form a corresponding pixel value. The mathematical model for each pixel value of the conventional camera system can be represented as

$$ d_F^{\mathrm{conv.}}(k) = \int_{\lambda_{VIS}} T_F(\lambda, k)\, E(\lambda, k)\, R(\lambda, k)\, d\lambda + e_F(k) \qquad (1) $$

where $F$ is a specific color channel, $k$ indicates the spatial location of a pixel, $d_F^{\mathrm{conv.}}(k)$ is the intensity at the $k$th pixel position sensed by a sensor of a color channel $F$, $T_F(\lambda, k)$ is the spectral distribution of transmittance by the color filter array for $F$, $E(\lambda, k)$ is the spectral distribution of the light source, $R(\lambda, k)$ is the spectral distribution of object reflectance in the scene, $e_F(k)$ is the temporal dark noise [29] characteristic of the sensor, and $\lambda_{VIS}$ refers to the range of visible wavelengths. A conventional camera system captures the visible range image only with the help of the IR cut filter, and concurrent capture of the NIR image is possible by removing the IR cut filter. When the IR cut filter is removed, (1) can be changed to

$$ d_F(k) = \int_{\lambda_{VIS}} T_F(\lambda, k)\, E(\lambda, k)\, R(\lambda, k)\, d\lambda + \int_{\lambda_{NIR}} T_F(\lambda, k)\, E(\lambda, k)\, R(\lambda, k)\, d\lambda + e_F(k) \qquad (2) $$

where $d_F(k)$ represents the intensity at the $k$th pixel position sensed by a sensor of a color channel $F$, $\lambda_{NIR}$ refers to the range of NIR wavelengths, and the intensity includes both visible and NIR wavelengths. In this paper, the four color channels $F \in \{C, M, Y, G\}$ are employed with a CMYG CFA. The intensity $d_F(k)$ sensed by a sensor of a color channel $F$ according to the light integration model of (2) can be written as

$$ d_F(k) = d_F^{VIS}(k) + d_F^{NIR}(k) + e_F(k) \qquad (3) $$

where $d_F^{VIS}(k)$ and $d_F^{NIR}(k)$ are the pixel intensities due to visible and NIR wavelengths, respectively. In (3), $d_F^{NIR}(k)$ is the NIR intensity sensed by a sensor of each color channel; that is, $d_C^{NIR}(k)$, $d_M^{NIR}(k)$, $d_Y^{NIR}(k)$, and $d_G^{NIR}(k)$. The NIR intensity sensed in each color channel, $d_F^{NIR}(k)$, is modeled as $d_F^{NIR}(k) = \omega_F\, d^{NIR}(k)$, where $\omega_F$ (that is, $\omega_C$, $\omega_M$, $\omega_Y$, and $\omega_G$) is a weighting factor of the NIR transmittance on each channel of the CFA, and $d^{NIR}(k)$ is the NIR intensity unaffected by the CFA. Using this representation, (3) can be reformulated as

$$ d_F(k) = d_F^{VIS}(k) + \omega_F\, d^{NIR}(k) + e_F(k) \qquad (4) $$

More detailed information about the linear relation in (4) will be discussed later along with experimental verification. Equation (4) can be represented in the following matrix form:

$$ \begin{bmatrix} d_C^{VIS}(k) + \omega_C d^{NIR}(k) \\ d_M^{VIS}(k) + \omega_M d^{NIR}(k) \\ d_Y^{VIS}(k) + \omega_Y d^{NIR}(k) \\ d_G^{VIS}(k) + \omega_G d^{NIR}(k) \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & \omega_C \\ 0 & 1 & 0 & 0 & \omega_M \\ 0 & 0 & 1 & 0 & \omega_Y \\ 0 & 0 & 0 & 1 & \omega_G \end{bmatrix} \begin{bmatrix} d_C^{VIS}(k) \\ d_M^{VIS}(k) \\ d_Y^{VIS}(k) \\ d_G^{VIS}(k) \\ d^{NIR}(k) \end{bmatrix} \qquad (5) $$

The matrix form in (5) represents a linear relationship between the visible and NIR mixed input and the separated output. The separated four channel CMYG color-based visible output values depend on the device characteristics. Therefore, the CMYG color space is not appropriate to obtain consistent colors on different devices. To produce device independent color in this paper, the CMYG color space is converted into the standard CIE1931 XYZ color space [25] with

$$ \begin{bmatrix} d_C^{VIS}(k) \\ d_M^{VIS}(k) \\ d_Y^{VIS}(k) \\ d_G^{VIS}(k) \end{bmatrix} = \begin{bmatrix} \alpha_{11} & \alpha_{12} & \alpha_{13} \\ \alpha_{21} & \alpha_{22} & \alpha_{23} \\ \alpha_{31} & \alpha_{32} & \alpha_{33} \\ \alpha_{41} & \alpha_{42} & \alpha_{43} \end{bmatrix} \begin{bmatrix} d_X^{VIS}(k) \\ d_Y^{VIS}(k) \\ d_Z^{VIS}(k) \end{bmatrix} \qquad (6) $$

where the $\alpha_{ij}$'s are the color conversion coefficients, and $d_X^{VIS}(k)$, $d_Y^{VIS}(k)$, and $d_Z^{VIS}(k)$ are the intensities of the visible image in XYZ color space. Supplementing (6) with the NIR component gives

$$ \begin{bmatrix} d_C^{VIS}(k) \\ d_M^{VIS}(k) \\ d_Y^{VIS}(k) \\ d_G^{VIS}(k) \\ d^{NIR}(k) \end{bmatrix} = \begin{bmatrix} \alpha_{11} & \alpha_{12} & \alpha_{13} & 0 \\ \alpha_{21} & \alpha_{22} & \alpha_{23} & 0 \\ \alpha_{31} & \alpha_{32} & \alpha_{33} & 0 \\ \alpha_{41} & \alpha_{42} & \alpha_{43} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} d_X^{VIS}(k) \\ d_Y^{VIS}(k) \\ d_Z^{VIS}(k) \\ d^{NIR}(k) \end{bmatrix} \qquad (7) $$

By substituting (7) into (5), the following equation can be found:

$$ \begin{bmatrix} d_C^{VIS}(k) + \omega_C d^{NIR}(k) \\ d_M^{VIS}(k) + \omega_M d^{NIR}(k) \\ d_Y^{VIS}(k) + \omega_Y d^{NIR}(k) \\ d_G^{VIS}(k) + \omega_G d^{NIR}(k) \end{bmatrix} = C \begin{bmatrix} d_X^{VIS}(k) \\ d_Y^{VIS}(k) \\ d_Z^{VIS}(k) \\ d^{NIR}(k) \end{bmatrix} \quad \text{where} \quad C = \begin{bmatrix} \alpha_{11} & \alpha_{12} & \alpha_{13} & \omega_C \\ \alpha_{21} & \alpha_{22} & \alpha_{23} & \omega_M \\ \alpha_{31} & \alpha_{32} & \alpha_{33} & \omega_Y \\ \alpha_{41} & \alpha_{42} & \alpha_{43} & \omega_G \end{bmatrix} \qquad (8) $$

In (8), C is called the combination matrix. By applying (8), (3) can be rewritten as

$$ \begin{bmatrix} d_C(k) \\ d_M(k) \\ d_Y(k) \\ d_G(k) \end{bmatrix} = C \begin{bmatrix} d_X^{VIS}(k) \\ d_Y^{VIS}(k) \\ d_Z^{VIS}(k) \\ d^{NIR}(k) \end{bmatrix} + \begin{bmatrix} e_C(k) \\ e_M(k) \\ e_Y(k) \\ e_G(k) \end{bmatrix} \qquad (9) $$

In this paper, the separation matrix, S, is defined as

$$ S = C^{-1} \qquad (10) $$

Finally, the separated XYZ and NIR pixel intensities at the kth position can be found with

$$ S \begin{bmatrix} d_C(k) \\ d_M(k) \\ d_Y(k) \\ d_G(k) \end{bmatrix} = \begin{bmatrix} d_X^{VIS}(k) \\ d_Y^{VIS}(k) \\ d_Z^{VIS}(k) \\ d^{NIR}(k) \end{bmatrix} + S \begin{bmatrix} e_C(k) \\ e_M(k) \\ e_Y(k) \\ e_G(k) \end{bmatrix} = \begin{bmatrix} \hat{d}_X^{VIS}(k) \\ \hat{d}_Y^{VIS}(k) \\ \hat{d}_Z^{VIS}(k) \\ \hat{d}^{NIR}(k) \end{bmatrix} \qquad (11) $$

where $\hat{d}_X^{VIS}(k)$, $\hat{d}_Y^{VIS}(k)$, $\hat{d}_Z^{VIS}(k)$, and $\hat{d}^{NIR}(k)$ are the separated visible (in XYZ color space) and NIR intensities containing temporal dark noise. The noise analysis and corresponding denoising process will be discussed later.
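As a rough illustration of Eqs. (8)–(11), the sketch below builds the combination matrix C from color conversion coefficients and NIR weights, inverts it to obtain S, and applies S to every demosaiced CMYG pixel. All numeric values are placeholders for illustration only, not the coefficients trained in this paper.

```python
import numpy as np

alpha = np.array([[0.35, 0.45, 0.55],       # hypothetical CMYG -> XYZ coefficients (Eq. (6))
                  [0.60, 0.35, 0.30],
                  [0.55, 0.60, 0.10],
                  [0.30, 0.55, 0.15]])
omega = np.array([0.95, 1.05, 1.00, 1.00])  # hypothetical NIR weights, omega_G = 1

C = np.hstack([alpha, omega[:, None]])      # 4x4 combination matrix, Eq. (8)
S = np.linalg.inv(C)                        # separation matrix, Eq. (10)

def separate(cmyg):
    """cmyg: HxWx4 demosaiced mixed image -> (HxWx3 XYZ image, HxW NIR image), Eq. (11)."""
    out = cmyg @ S.T                        # per-pixel multiplication by S
    return out[..., :3], out[..., 3]
```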

2.2. Color Conversion to XYZ Color Space

In this paper, the CMYG CFA is used to capture the color in the visible wavelengths, so the input image has four color channels. On the other hand, the display system controls the color in the three channels of an RGB color space, so the input image in CMYG color space should be changed into the RGB color space. The conventional way to do this conversion is to change the input intensity values of the sensor to the target standard color space. In this paper, we convert the CMYG input to the standard CIE1931 XYZ [25] color space. From (6), the relation between CMYG and XYZ is linear. The color conversion can therefore be done by linear regression [28] excluding the bias term, making it easy to adapt to the proposed model. In the process of training the color conversion coefficients by linear regression, the selection of the color temperature of the light source and of the reference target is important. The color temperature of the light source affects the white balance of the image: if the color conversion is trained under a 6500 K fluorescent light, the white balance is only matched under the same light source. In the case of the reference target color, the standard Macbeth color chart [30] is widely used to train the color space conversion coefficients in (6). After finding the color conversion coefficients from CMYG to XYZ space, the converted XYZ image can be converted to an RGB color space for display on a device. In this paper, we apply the conversion to sRGB [31], which is the predominantly used color space for displaying photos on the internet [32].
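A minimal sketch of this training step is given below: the 4×3 coefficient matrix of Eq. (6) is fitted by least squares without a bias term. The arrays `cmyg_patches` (measured mean CMYG responses of the chart patches) and `xyz_ref` (the corresponding reference XYZ values under the same illuminant) are assumed inputs, not data from the paper.

```python
import numpy as np

def fit_color_conversion(cmyg_patches, xyz_ref):
    """Least-squares fit of the 4x3 matrix alpha such that cmyg ~= alpha @ xyz (Eq. (6)).

    cmyg_patches: Nx4 measured CMYG values, xyz_ref: Nx3 reference XYZ values.
    """
    # Solve xyz_ref (Nx3) @ alpha.T (3x4) ~= cmyg_patches (Nx4) without a bias term.
    alpha_T, *_ = np.linalg.lstsq(xyz_ref, cmyg_patches, rcond=None)
    return alpha_T.T   # 4x3 coefficient matrix of Eq. (6)
```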

2.3. Calculation of the NIR Weighting Coefficient

Section 2.2 explained how to derive the color space conversion coefficients in (8). This section describes how to find the values of the NIR weighting coefficients $\omega_C$, $\omega_M$, $\omega_Y$, and $\omega_G$ in (8). The proposed method estimates the weighting coefficients by finding the relative ratios between the coefficients. To find the relative ratio, a weighting coefficient of a specific channel is treated as a reference value. In this paper, we set the NIR intensity in the green channel of the input image as the reference. To calculate the coefficients, (4) is slightly rewritten as

$$ \omega_F\, d^{NIR}(k) = \frac{\omega_F}{\omega_G}\, \omega_G\, d^{NIR}(k) = \omega_{FG}\, \omega_G\, d^{NIR}(k) \qquad (12) $$

where $\omega_{FG}$ is the ratio of $\omega_F$ against $\omega_G$, that is, $\omega_F / \omega_G$, $F \in \{C, M, Y, G\}$, and $\omega_{GG} = 1$. Reflecting (12) in (8), the combination matrix $C$ can also be changed to

$$ C = \begin{bmatrix} \alpha_{11} & \alpha_{12} & \alpha_{13} & \omega_{CG}\,\omega_G \\ \alpha_{21} & \alpha_{22} & \alpha_{23} & \omega_{MG}\,\omega_G \\ \alpha_{31} & \alpha_{32} & \alpha_{33} & \omega_{YG}\,\omega_G \\ \alpha_{41} & \alpha_{42} & \alpha_{43} & \omega_G \end{bmatrix} \qquad (13) $$

To find the values of $\omega_{CG}$, $\omega_{MG}$, and $\omega_{YG}$, an experiment was performed. To capture an image which contains only the NIR information, the camera captures a scene which is illuminated with an NIR light source only. The image was taken with an 850 nm NIR light source, so it only contains NIR light reflected from a white surface; the light source of visible wavelengths is absent. Under this condition, and reflecting (13), (4) can be expressed as

$$ d_F(k) = \omega_{FG}\, \omega_G\, d^{NIR}(k) + e_F(k) \qquad (14) $$

From (14), it can be seen that only reflected NIR light is projected onto the sensor after passing through the CMYG CFA. Therefore, the value of the weighting factor only depends on the sensor characteristic and the transmittance of the CMYG CFA. Figure 2 depicts scatter plots of NIR intensity between two color channels. It clearly shows a linear relationship between the color channels, and the slopes of the linear regressions to the scatter plots give the values of $\omega_{CG}$, $\omega_{MG}$, and $\omega_{YG}$. Their values are computed by setting the value of $\omega_G$ to 1, which implies that the amount of NIR light passing through the green color filter is treated as the relative reference value of the output NIR image.
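The following sketch shows one way to obtain these slopes from an NIR-only capture by regressing each channel against the green channel with a line through the origin; the channel ordering (C, M, Y, G) and the input array are assumptions for illustration.

```python
import numpy as np

def nir_weight_ratios(nir_only_cmyg):
    """nir_only_cmyg: HxWx4 demosaiced capture under 850 nm illumination only.

    Returns the ratios omega_FG of Eq. (12); the green ratio is 1 up to noise.
    """
    g = nir_only_cmyg[..., 3].ravel()
    ratios = {}
    for idx, name in enumerate("CMYG"):
        ch = nir_only_cmyg[..., idx].ravel()
        # least-squares slope of ch = omega_FG * g (regression through the origin)
        ratios[name] = float(ch @ g / (g @ g))
    return ratios
```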


Figure 2. The near-infrared (NIR) intensity scatter plot between two different color channels: (a) Green-Cyan, (b) Green-Magenta, and (c) Green-Yellow.

2.4. Noise Analysis

After the separation by multiplying the separation matrix ($S$) with the input image pixels, noise is noticeable in the separated visible and NIR images due to the following noise characteristic, which is defined as the separation noise in this paper. Equation (11) contains the noise model after the separation process. The noise in each separated image channel is represented as a weighted sum of the input temporal dark noise, with the weights given by the elements of the separation matrix. This implies that the separation noise depends on the input temporal dark noise and the separation matrix. Experimental data can elucidate the relationship between the noise in the input and in the separated images. Figure 3 shows a histogram of the grey reference target taken with the proposed imaging system under fluorescent and 850 nm NIR light. The variance of the captured grey reference target image can be treated as that of the temporal dark noise. In this paper, we assume the noise is Gaussian, and the distribution is estimated from the shape of the histogram in Figure 3. To check the relationship between the input noise and the separation noise, the following weighted sum of Gaussian random variables [33] is calculated. In general, if

$$ e_F \sim N(\mu_F, \sigma_F^2), \quad F \in \{C, M, Y, G\} \qquad (15) $$

then,

$$ e_{\{X,Y,Z,NIR\}} = \sum_{F \in \{C,M,Y,G\}} s_F\, e_F \sim N\!\left( \sum_{F \in \{C,M,Y,G\}} s_F\, \mu_F,\; \sum_{F \in \{C,M,Y,G\}} s_F^2\, \sigma_F^2 + 2 \sum_{F_i, F_j \in \{C,M,Y,G\}} s_{F_i}\, s_{F_j}\, \mathrm{Cov}(e_{F_i}, e_{F_j}) \right) \qquad (16) $$

where $\mu_F$ and $\sigma_F^2$ are the mean and variance of the temporal dark noise distribution of the input image, and $N$ indicates the Gaussian probability density function. $s_F$ represents the corresponding element in the separation matrix and $e_F$ is the temporal dark noise on the corresponding channel. The "estimation" in Figure 3 shows the estimated noise distribution of the separated image using (16) with the input noise distribution. The estimated result exactly matches the histogram of the separated image. From this result, it is expected that the noise in the separated images is fully derived from the temporal dark noise of the input image.
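A small numeric sketch of this propagation is shown below: the covariance of the separated-channel noise follows from the input dark-noise covariance and the rows of the separation matrix, as in Eqs. (15)–(16). Both the separation matrix and the covariance values are made-up placeholders for what would be measured on the grey reference target.

```python
import numpy as np

# hypothetical separation matrix S (rows: X, Y, Z, NIR; columns: C, M, Y, G)
S = np.array([[ 1.9, -0.6, -0.4, -0.1],
              [-0.5,  1.7, -0.3, -0.2],
              [-0.4, -0.5,  1.8, -0.1],
              [-0.3, -0.2, -0.4,  1.0]])

cov_in = np.diag([2.1, 2.4, 2.3, 2.0])   # hypothetical dark-noise covariance of the CMYG input
cov_out = S @ cov_in @ S.T               # Eq. (16): covariance of the separation noise
print(np.diag(cov_out))                  # variances of the noise in the X, Y, Z and NIR channels
```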

Figure 3. The noise distribution of the input and the separated images (in images of a grey surface object taken under fluorescent and 850 nm light).

2.5. Denoising Method

In Figure 3, it is observed that the variance of the noise is larger in the separated images than in the input image. To reduce the noise in the separated images, an effective denoising technique should be applied. In general, we can apply denoising techniques to the separated images directly; BM3D filtering [34] is a good example of denoising that can potentially be considered in this case. In Section 2.4, we discovered that the noise distribution in the separated image is formed by the weighted sum of the noise in each input image channel. This implies that if denoising is applied to the input mixed image, the noise in the separated images will also be reduced. However, it is hard to estimate the amount of noise in the input image. In a different approach, the input image can be used as a guidance image to reduce the noise of the separated images. If the output image has a linear relationship with a guidance image, the noise can be minimized by exploiting the low noise of the guidance image. A denoising technique called color-guided filtering [26] exploits the linear relationship between the noisy image and the guide image. Since there is a linear relationship between the input and the separated results, the proposed method is well matched to the guided filtering approach. The color-guided filtering method [26] is therefore applied in this paper to minimize the separation noise, using the input CMYG image as the guide image. According to [26], a color-guided filter with a CMYG guide can be applied by

$$ \widetilde{d}_X^{VIS}(k) = a_X^{T}(k) \begin{bmatrix} d_C(k) \\ d_M(k) \\ d_Y(k) \\ d_G(k) \end{bmatrix} + b_X(k) \qquad (17) $$

where $\widetilde{d}_X^{VIS}(k)$ is the $k$th pixel of the filtered separated image of the X channel. $a_X(k)$ and $b_X(k)$ can be calculated by

$$ a_X(k) = (\Sigma_k + \varepsilon U)^{-1} \left( \frac{1}{|w|} \sum_{i \in w_k} \begin{bmatrix} d_C(i) \\ d_M(i) \\ d_Y(i) \\ d_G(i) \end{bmatrix} \hat{d}_X^{VIS}(i) - \mu_k\, \overline{\hat{d}_X^{VIS}}(k) \right) \qquad (18) $$

$$ b_X(k) = \overline{\hat{d}_X^{VIS}}(k) - a_X^{T}(k)\, \mu_k \qquad (19) $$

where $a_X(k)$ is a $4 \times 1$ coefficient vector, $\Sigma_k$ is the $4 \times 4$ covariance matrix of the input image in the window $w_k$, $U$ is a $4 \times 4$ identity matrix, $\mu_k$ is the mean of the input image in the window, and $\overline{\hat{d}_X^{VIS}}(k)$ is the mean of $\hat{d}_X^{VIS}(k)$. For the Y and Z channels, the same method as the color-guided filtering of the X channel is applied.
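Below is a rough sketch of this filtering step in the spirit of the guided image filter of [26], using the 4-channel CMYG mixed image as the guide and one separated channel as the noisy target. The window radius and epsilon are illustrative choices, not values reported in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def box(x, r):
    """Mean over a (2r+1)x(2r+1) window, applied to the first two (spatial) axes."""
    return uniform_filter(x, size=(2 * r + 1, 2 * r + 1) + (1,) * (x.ndim - 2))

def guided_filter_cmyg(guide, target, r=8, eps=1e-4):
    """guide: HxWx4 CMYG mixed image, target: HxW separated (noisy) channel."""
    guide = np.asarray(guide, dtype=float)
    target = np.asarray(target, dtype=float)

    mean_I = box(guide, r)                            # per-channel window means (mu_k)
    mean_p = box(target[..., None], r)[..., 0]
    cov_Ip = box(guide * target[..., None], r) - mean_I * mean_p[..., None]

    # per-pixel 4x4 covariance matrix of the guide (Sigma_k)
    corr_II = box(guide[..., :, None] * guide[..., None, :], r)
    cov_II = corr_II - mean_I[..., :, None] * mean_I[..., None, :]

    a = np.linalg.solve(cov_II + eps * np.eye(4), cov_Ip[..., None])[..., 0]   # Eq. (18)
    b = mean_p - np.einsum('hwc,hwc->hw', a, mean_I)                           # Eq. (19)

    # average the linear coefficients over the windows and apply Eq. (17)
    return np.einsum('hwc,hwc->hw', box(a, r), guide) + box(b[..., None], r)[..., 0]
```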


Figure 4 shows the denoising results of the separated RGB and NIR images. Wiener [27] (Chapter 5) and BM3D [34] filters are applied to the separated images, and the color-guided filter is applied by (17). Before the filtering process, the noise is clearly visible on the separated images. The Wiener filter reduces the noise, but also suffers from blurring along edges. With BM3D, the edges look clearer than with the Wiener filter, but the texture on the leaf in the separated RGB image looks blurrier than in the image without filtering. The image resulting from the guided filter looks better than those of the Wiener and BM3D filters, keeping the clarity of the edges and the texture on the leaf.

Figure 4. Subjective quality comparison of the denoising methods. The processing time is also shown.

3. Experimental Results

3.1. Experimental Condition

To evaluate the experimental results of the proposed method, we used a CMYG CFA-based camera which is available on the consumer market (Figure 5b). The camera is modified by removing the IR cut filter. Figure 5a shows the hardware of the IR cut filter and the CMYG CFA-based detector inside of the camera. Test images were taken under three different light sources: fluorescent and 850 nm NIR light, a halogen light, and sunlight. Image processing software is implemented according to the proposed method. The color-guided filter-based [26] denoising method is applied to both separated XYZ and NIR images. The separated XYZ image is converted into the sRGB [31] space so that the resulting images are displayed in the sRGB color space. After all the processing, including CMYG to VIS/NIR separation, denoising, and color conversion, the output images are referred to as the separated RGB and the separated NIR images.


Figure 5. Imaging system used in the experiment. (a) The IR cut filter and imaging detector in the camera, (b) the camera used in the experiment.

3.2. The Separation of Band Spectrum

This experiment shows how well the visible and NIR images are separated from the mixed input image. Band pass filters from 400 to 1000 nm are aligned in a row and the camera captures the light passing through the filters from a halogen light source. Figure 6 shows the experimental environment and the separation results. From the result, the separated RGB image includes wavelengths from 400 to 800 nm, while the separated NIR image includes data from 750 to 1000 nm. Due to the degradation of the spectral sensitivity of silicon-based sensors [35], the results in the edge bands (400 nm and 1000 nm) look darker than the other bands. The result shows that the separated RGB and NIR images overlap from 750 to 800 nm, but the other bands are clearly separated. From this result, 750 nm is determined to be the starting wavelength of NIR, which is similar to several other conventional NIR imaging systems.

Figure 6. Experimental results of the separation of a band spectrum (bandwidth of the bandpass filters: 20 nm).

3.3. Separation under a NIR Spot Light

To check the separation result under a combination of visible and NIR light sources, the images are captured under both fluorescent light and an 850 nm NIR flashlight. Figure 7 depicts the experimental result. The fluorescent light illuminates the entire area of the scene, but the NIR light shines only on a part of the scene to check the separation correctness of the NIR region. NIR light spots can be noted in the first row of Figure 7 from left to right. From the results, it is observed that the separated RGB images are similar because the fluorescent light illumination is identical in all images. On the other hand, the separated NIR images show different results, implying that the proposed method successfully separates the visible and NIR images even if the brightness between the visible and NIR light sources is changed.


Figure 7. The separation experimental result under fluorescent and 850 nm NIR spot light illuminating four different locations from left to right.

3.4. Separation Results on Applications

To evaluate the characteristics of the separated visible and NIR images, images were taken under several light sources. From the separation results, we can check how the separated images represent the characteristics of each spectrum band of the subjects. Figure 8 depicts the separation results from three different scenes. The separated NIR images are indicative of the special characteristics of NIR light.

Counterfeit money detection: The first row in Figure 8 shows the separation results on the image of a Korean banknote. The separated NIR image contains only texture information from certain parts of the bill. This NIR characteristic has been used to detect counterfeit banknotes [36].

Different NIR reflectance of subjects: The second row in Figure 8 was taken outside under sunlight. The scene consists of three different parts: the sky at the top, the building and trees in the middle, and the lake at the bottom. In the separated NIR image, the sky and the lake look relatively darker than the tree objects. According to Rayleigh scattering [37], the wavelength of NIR is longer than that of visible light, so the sky looks dark in the NIR image. At the bottom, the lake looks dark in the separated NIR image because the water absorption of NIR light is higher than that of visible light [38,39]. On the other hand, the trees look brighter because the spectral reflectance of the leaves is higher in the NIR than in the visible range [40]. This characteristic has been applied to remote sensing applications for vegetation searches in a region [4].

Medical application: As one example of medical applications, NIR images can be utilized to detect veins. The separated RGB image in the third row of Figure 8 shows a picture of an arm in which it is hard to detect the veins. On the other hand, the separated NIR image clearly indicates the shape of the veins in the arm. NIR light reflection from the vein inside the skin depends on the light intensity and thickness of tissue [41]. This application has been used for non-invasive medical diagnosis [9,10,42].


Figure 8. Examples of potential applications of the proposed method (Note: "SPECIMEN" text at banknote is overlapped on the images).

3.5. Objective Quality Comparison

In this paper, the objective quality performance of the proposed separation method is compared with a representative VIS-NIR separation method which is based on compressive sensing (RGB-CS) [21]. The imaging system used in this paper is CMYG CFA-based; however, the work for comparison [21] was applied to an RGB CFA-based imaging system. To minimize the differences in the experimental conditions due to the hardware difference of the imaging systems, a simulation is established to compare the objective performance between the two different approaches. Figure 9 shows the simulation structure for the objective quality measure.

Figure 9. Block diagram of simulation method for the measurement of objective quality.

To minimize differences between the method proposed in this work and the method used for comparison, mixed input images are generated from a pre-captured RGB and NIR image dataset [43]. The dataset was captured by a conventional digital camera with and without its IR cut filter removed: the original RGB images were captured with the IR cut-off filter in place, and the original NIR images were captured with an IR long pass filter (and the IR cut-off filter removed). The input images were generated in two different ways. In the case of the proposed method, the color space of the input RGB image is converted to the four channel CMYG color space. The converted CMYG image is then added to the NIR image to obtain the mixed input image. The weighting coefficients are pre-calculated, and the values are applied in the separation process. The mixed input image is converted to the mosaiced input image, and the mosaiced input image is used as the input of the separation process. In the case of the method used for comparison, the RGB and NIR inputs are added together using the mathematical representation of compressive sensing (RGB-CS) [21], and the pre-calculated weighting coefficients are applied in the same manner as in the proposed method. In this paper, we implemented the separation process of RGB-CS following the description in the paper [21]. The performance of the separated RGB and NIR images is measured against the input RGB and NIR images in terms of the peak-signal-to-noise-ratio (PSNR) [44] and color distance.

PSNR comparison: Table 1 shows the simulation conditions. In this paper, 54 sample RGB and NIR images are used for the simulation. Figure 10 depicts the PSNR comparison graphs between the two methods. From the simulation results, the PSNR of the proposed method is higher than that of RGB-CS. The average PSNR of the separated RGB and NIR images of the proposed method is 37.04 dB and 33.29 dB, respectively. On the other hand, the average PSNR of the separated RGB and NIR images of RGB-CS is 30.32 dB and 30.74 dB, respectively. According to the average PSNR comparison, the objective quality of the proposed method is 6.72 and 2.55 dB higher, respectively, than RGB-CS.

Table 1. The Simulation Conditions.

| Method | RGB-CS [21] | The Proposed Method |
| --- | --- | --- |
| Sample image info. | 54 images (508 × 768–1024 × 762) [43] | 54 images (508 × 768–1024 × 762) [43] |
| Type of CFA ¹ | RGB | CMYG |
| Separation matrix | Pre-defined | Pre-defined |
| Sparsifying matrix | Discrete cosine transform | N/A |
| Demosaicing | Bilinear (GRBG pattern) | Bilinear (GMYC pattern) |
| Separation method | SL0 sparse decomposition [45] | Matrix multiplication |

¹ RGB means that the input image is generated by considering the RGB CFA-based system and CMYG by considering the CMYG CFA.
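The sketch below illustrates how a mixed CMYG input could be synthesized from an RGB/NIR pair for this simulation, under the assumptions that an RGB-to-CMYG conversion matrix and the NIR weights are already available; the conversion matrix and tile layout used here are placeholders, not the ones used in the paper.

```python
import numpy as np

# hypothetical 4x3 RGB -> CMYG conversion and NIR weights (omega_G = 1)
RGB2CMYG = np.array([[0.0, 1.0, 1.0],    # cyan    ~ G + B
                     [1.0, 0.0, 1.0],    # magenta ~ R + B
                     [1.0, 1.0, 0.0],    # yellow  ~ R + G
                     [0.0, 1.0, 0.0]])   # green
omega = np.array([0.95, 1.05, 1.00, 1.00])

def make_mixed_mosaic(rgb, nir):
    """rgb: HxWx3, nir: HxW -> HxW mosaiced mixed input for the separation process."""
    cmyg = rgb @ RGB2CMYG.T + nir[..., None] * omega        # mixed CMYG image, Eq. (4)
    mosaic = np.zeros(nir.shape)
    offsets = {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (1, 1)}  # assumed CMYG tile layout
    for ch, (dy, dx) in offsets.items():
        mosaic[dy::2, dx::2] = cmyg[dy::2, dx::2, ch]        # keep one channel per CFA site
    return mosaic
```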

Figure 10. Peak-signal-to-noise-ratio (PSNR) comparison of RGB-CS [21] and the proposed method: (a) the separated RGB image, (b) the separated NIR image.

Color distance comparison: To compare color differences between two images, the color distances between two colors in the sRGB, XYZ, and CIELab [46] color spaces are calculated. In this paper, the color distance is calculated by the following averages of Euclidean distances:

$$ \begin{aligned} CD_{RGB} &= \frac{1}{N} \sum_{i=1}^{N} \sqrt{ (R_i^{org.} - R_i^{sep.})^2 + (G_i^{org.} - G_i^{sep.})^2 + (B_i^{org.} - B_i^{sep.})^2 } \\ CD_{XYZ} &= \frac{1}{N} \sum_{i=1}^{N} \sqrt{ (X_i^{org.} - X_i^{sep.})^2 + (Y_i^{org.} - Y_i^{sep.})^2 + (Z_i^{org.} - Z_i^{sep.})^2 } \\ CD_{CIELab} &= \frac{1}{N} \sum_{i=1}^{N} \sqrt{ (L_i^{org.} - L_i^{sep.})^2 + (a_i^{org.} - a_i^{sep.})^2 + (b_i^{org.} - b_i^{sep.})^2 } \\ CD_{RGB}^{histogram} &= \frac{1}{M} \sum_{i=1}^{M} \sqrt{ (Rhist._i^{org.} - Rhist._i^{sep.})^2 + (Ghist._i^{org.} - Ghist._i^{sep.})^2 + (Bhist._i^{org.} - Bhist._i^{sep.})^2 } \\ CD_{XYZ}^{histogram} &= \frac{1}{M} \sum_{i=1}^{M} \sqrt{ (Xhist._i^{org.} - Xhist._i^{sep.})^2 + (Yhist._i^{org.} - Yhist._i^{sep.})^2 + (Zhist._i^{org.} - Zhist._i^{sep.})^2 } \\ CD_{CIELab}^{histogram} &= \frac{1}{M} \sum_{i=1}^{M} \sqrt{ (Lhist._i^{org.} - Lhist._i^{sep.})^2 + (ahist._i^{org.} - ahist._i^{sep.})^2 + (bhist._i^{org.} - bhist._i^{sep.})^2 } \end{aligned} \qquad (20) $$

where $CD_{RGB}$, $CD_{XYZ}$, and $CD_{CIELab}$ are the color distances in the sRGB, XYZ, and CIELab color spaces, respectively, and $CD_{RGB}^{histogram}$, $CD_{XYZ}^{histogram}$, and $CD_{CIELab}^{histogram}$ are the corresponding histogram distances in each color space. $N$ is the number of pixels in the image and $M$ is the number of bins in the histogram; a histogram bin size of 256 is used in this paper. For the NIR image, the intensity distance is also calculated by

$$
\begin{aligned}
ID_{NIR} &= \frac{1}{N}\sum_{i=1}^{N}\left(NIR_i^{org.}-NIR_i^{sep.}\right)\\
ID_{NIR}^{histogram} &= \frac{1}{M}\sum_{i=1}^{M}\left(NIRhist_i^{org.}-NIRhist_i^{sep.}\right)
\end{aligned}
\tag{21}
$$

where $ID_{NIR}$ and $ID_{NIR}^{histogram}$ are the intensity distance of the NIR images and of their histograms, respectively. Figure 11 depicts the results of the color distance comparison between RGB-CS and the proposed method. The results show that the proposed method yields lower color distance values than the RGB-CS method, which means that the separated images of the proposed method are closer to the original images than those of RGB-CS.
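For reference, a minimal sketch of how the distance metrics in Equations (20) and (21) might be computed is given below (Python/NumPy). The conversions from sRGB to the XYZ and CIELab spaces are omitted, the histograms are assumed to use 256 bins over the 8-bit range, and the function names are illustrative rather than part of the authors' implementation.

```python
import numpy as np

def color_distance(orig, sep):
    """Average per-pixel Euclidean distance between two three-channel
    images (Eq. 20), e.g. in the sRGB, XYZ, or CIELab space."""
    diff = orig.astype(np.float64) - sep.astype(np.float64)
    return np.mean(np.sqrt(np.sum(diff ** 2, axis=-1)))

def histogram_distance(orig, sep, bins=256, value_range=(0, 256)):
    """Histogram version of Eq. (20): per-bin Euclidean distance across
    the channel histograms, averaged over the M = 256 bins."""
    sq_diffs = np.zeros(bins)
    for c in range(orig.shape[-1]):
        h_o, _ = np.histogram(orig[..., c], bins=bins, range=value_range)
        h_s, _ = np.histogram(sep[..., c], bins=bins, range=value_range)
        sq_diffs += (h_o.astype(np.float64) - h_s.astype(np.float64)) ** 2
    return np.mean(np.sqrt(sq_diffs))

def intensity_distance(orig_nir, sep_nir):
    """Average intensity difference between the original and separated
    single-channel NIR images, as written in Eq. (21)."""
    return np.mean(orig_nir.astype(np.float64) - sep_nir.astype(np.float64))
```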



Figure 11. Color distance comparison of RGB-CS [21] and the proposed method: (a) sRGB space; (b) histogram in sRGB space; (c) XYZ space; (d) histogram in XYZ space; (e) CIELab space; (f) histogram in CIELab space; (g) intensity distance of the NIR image; (h) histogram distance of the NIR intensity.

Figure 12 depicts the subjective quality comparison between RGB-CS and the proposed method. In Figure 12, both methods separate the RGB and NIR images fairly well, but for the RGB images more differences were found between RGB-CS and the ground truth. The RGB results of RGB-CS look sharper than those of the proposed method, but careful comparison with the ground truth reveals that the high frequencies of RGB-CS are over-emphasized (see the cloud in the images in the first row, for example). This means that the proposed method generates separated results that are more faithful to the ground truth.



Figure 12. The subjective quality comparison between RGB-CS [21] and the proposed method (note: the result images were encoded in the portable network graphics (PNG) format).

4. Conclusions

In this paper, visible and NIR image separation from a single image is proposed. The image is captured with a CMYG CFA-based camera that is modified by removing the IR cut filter in front of the detector. The proposed method is performed in a simple way, by multiplying the input pixels by a separation matrix. After the separation, the separated XYZ image can be converted into a chosen color space for display; in this work, the sRGB color space is used. To reduce the noise remaining after the separation process, we analyzed the noise characteristics of the separated images, and a color-guided filter [26] was applied for denoising using the CMYG input as the guidance image. The experimental results, obtained by testing with several band pass filters and light sources, show that the visible and NIR images are successfully separated. The proposed method was also applied to three applications: counterfeit money, vegetation, and vein detection. To measure the objective quality in terms of PSNR, the separation performance of the proposed method and of RGB-CS [21] was simulated. The simulation results show that the proposed method achieved 6.72 and 2.55 dB higher PSNR than RGB-CS on the separated RGB and NIR images, respectively.

Discussion and future work: In this paper, we proposed the separation of four visible channels and NIR from a CMYG CFA. However, the dynamic range and bit-depth of the sensor limit how many channels can be separated if the mixture contains more color bands than can be resolved. A novel approach should be developed in future work to overcome this limitation. Nevertheless, the proposed separation method can be improved with better color filter array mixtures by increasing the number of equations. If the spectral bands need to be increased for a certain application, the proposed method might be extended to capture more color bands with a single sensor, which also needs to be investigated as future work.
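For readers who want a concrete picture of the per-pixel separation step summarized above, the following minimal sketch assumes that, after demosaicing, the four mixed (C, M, Y, G) measurements at each pixel are multiplied by a pre-computed 4 × 4 separation matrix to yield (X, Y, Z, NIR). The actual matrix in the paper is derived from the color conversion and experimental data, so the code below is only an illustration of the operation, not the authors' implementation.

```python
import numpy as np

def separate(demosaiced_cmyg, separation_matrix):
    """Per-pixel separation by matrix multiplication.

    demosaiced_cmyg:   H x W x 4 array of mixed (C, M, Y, G) values.
    separation_matrix: assumed 4 x 4 matrix mapping (C, M, Y, G) -> (X, Y, Z, NIR).
    Returns an H x W x 4 array of separated (X, Y, Z, NIR) values.
    """
    h, w, c = demosaiced_cmyg.shape
    pixels = demosaiced_cmyg.reshape(-1, c)      # one row per pixel
    separated = pixels @ separation_matrix.T     # matrix multiplication per pixel
    return separated.reshape(h, w, c)
```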


Author Contributions: Conceptualization, Y.P. and B.J.; methodology, Y.P. and B.J.; software, Y.P.; validation, Y.P.; formal analysis, Y.P.; investigation, Y.P. and B.J.; resources, Y.P.; data curation, Y.P.; writing—original draft preparation, Y.P.; writing—review and editing, B.J.; visualization, Y.P.; supervision, B.J.; project administration, B.J.; funding acquisition, B.J. All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded by the Korea government (MSIT), grant number 2018-0-00348.

Acknowledgments: This work was supported in part by an Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2018-0-00348, Development of Intelligent Video Surveillance Technology to Solve the Problem of Deteriorating Arrest Rate by Improving CCTV Constraints).

Conflicts of Interest: The authors declare no conflict of interest.

References

1. ISO 20473:2007(E) Optics and Photonics–Spectral Bands; ISO: Geneva, Switzerland, 2007.
2. Süsstrunk, S.; Fredembach, C. Enhancing the Visible with the Invisible: Exploiting Near-Infrared to Advance Computational Photography and Computer Vision. SID Symp. Dig. Tech. Pap. 2010, 41, 90–93. [CrossRef]
3. Varjo, S.; Hannuksela, J.; Alenius, S. Comparison of near infrared and visible image fusion methods. In Proceedings of the International Workshop on Applications Systems and Services for Camera Phone Sensing (MobiPhoto2011), Penghu, Taiwan, 12 June 2011; University of Oulu: Penghu, Taiwan, 2011; pp. 7–11.
4. Gao, B.C. NDWI-A normalized difference water index for remote sensing of vegetation liquid water from space. Remote Sens. Environ. 1996, 58, 257–266. [CrossRef]
5. Schaul, L.; Fredembach, C.; Süsstrunk, S. Color image dehazing using the near-infrared. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 1629–1632.
6. Nicolai, B.M.; Beullens, K.; Bobelyn, E.; Peirs, A.; Saeys, W.; Theron, K.I.; Lammertyn, J. Nondestructive measurement of fruit and vegetable quality by means of NIR spectroscopy: A review. Postharvest Biol. Technol. 2007, 46, 99–118. [CrossRef]
7. Ariana, D.P.; Lu, R.; Guyer, D.E. Near-infrared hyperspectral reflectance imaging for detection of bruises on pickling cucumbers. Comput. Electron. Agric. 2006, 53, 60–70. [CrossRef]
8. Mehl, P.M.; Chen, Y.R.; Kim, M.S.; Chan, D.E. Development of hyperspectral imaging technique for the detection of apple surface defects and contaminations. J. Food Eng. 2004, 61, 67–81. [CrossRef]
9. Crisan, S.; Tarnovan, I.G.; Crisan, T.E. A Low Cost Vein Detection System Using Near Infrared Radiation. In Proceedings of the 2007 IEEE Sensors Applications Symposium, San Diego, CA, USA, 6–8 February 2007; pp. 1–6.
10. Miyake, R.K.; Zeman, R.D.; Duarte, F.H.; Kikuchi, R.; Ramacciotti, E.; Lovhoiden, G.; Vrancken, C. Vein imaging: A new method of near infrared imaging, where a processed image is projected onto the skin for the enhancement of vein treatment. Dermatol. Surg. 2006, 32, 1031–1038. [CrossRef] [PubMed]
11. Matsui, S.; Okabe, T.; Shimano, M.; Sato, Y. Image enhancement of low-light scenes with near-infrared flash images. Inf. Media Technol. 2011, 6, 202–210.
12. Zhuo, S.; Zhang, X.; Miao, X.; Sim, T. Enhancing Low Light Images using Near Infrared Flash Images. In Proceedings of the 2010 IEEE 17th International Conference on Image Processing, Hong Kong, China, 26–29 September 2010; pp. 2537–2540.
13. JAI's 2-CCD Camera. Available online: https://www.jai.com/products/ad-080-cl (accessed on 28 September 2020).
14. Langfelder, G.; Malzbender, T.; Longoni, A.F.; Zaraga, F. A device and an algorithm for the separation of visible and near infrared signals in a monolithic silicon sensor. In Proceedings of the SPIE 7882, Visual Information Processing and Communication II, San Francisco Airport, CA, USA, 25–26 January 2011; International Society for Optics and Photonics: Washington, DC, USA, 2011; p. 788207.
15. Koyama, S.; Inaba, Y.; Kasano, M.; Murata, T. A day and night vision MOS imager with robust photonic-crystal-based RGB-and-IR. IEEE Trans. Electron Devices 2008, 55, 754–759. [CrossRef]

16. Lu, Y.M.; Fredembach, C.; Vetterli, M.; Süsstrunk, S. Designing color filter arrays for the joint capture of visible and near-infrared images. In Proceedings of the 2009 IEEE International Conference on Image Processing, Cairo, Egypt, 7–10 November 2009; pp. 3797–3800.
17. OmniVision's RGB-IR CMOS Sensor. Available online: https://www.ovt.com/sensors/OV2736 (accessed on 28 September 2020).
18. Monno, Y.; Teranaka, H.; Yoshizaki, K.; Tanaka, M.; Okutomi, M. Single-Sensor RGB-NIR Imaging: High-Quality System Design and Prototype Implementation. IEEE Sens. J. 2019, 19, 497–507. [CrossRef]
19. Hu, X.; Heide, F.; Dai, Q.; Wetzstein, G. Convolutional Sparse Coding for RGB+NIR Imaging. IEEE Trans. Image Process. 2018, 27, 1611–1625. [CrossRef]
20. Park, C.; Kang, M.G. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition. Sensors 2016, 16, 719. [CrossRef] [PubMed]
21. Sadeghipoor, Z.; Lu, Y.M.; Süsstrunk, S. A novel compressive sensing approach to simultaneously acquire color and near-infrared images on a single sensor. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 1646–1650.
22. Park, Y.; Shim, H.J.; Dinh, K.Q.; Jeon, B. Visible and Near-Infrared Image Separation from CMYG Color Filter Array based Sensor. In Proceedings of the 58th International Symposium ELMAR, Zadar, Croatia, 12–14 September 2016; pp. 209–212.
23. Baer, R.L.; Holland, W.D.; Holm, J.; Vora, P. A Comparison of Primary and Complementary Color Filters for CCD based Digital Photography. In Sensors, Cameras, and Applications for Digital Photography; International Society for Optics and Photonics: Washington, DC, USA, 1999; pp. 16–25.
24. Axis Communications. CCD and CMOS Sensor Technology; Technical White Paper; Axis Communications: Waterloo, ON, Canada, 2011.
25. Smith, T.; Guild, J. The C.I.E. colorimetric standards and their use. Trans. Opt. Soc. 1931, 33, 73–134. [CrossRef]
26. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409. [CrossRef] [PubMed]
27. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Pearson: Upper Saddle River, NJ, USA, 2010; pp. 57–125.
28. Pointer, M.R.; Attridge, G.G.; Jacobson, R.E. Practical camera characterization for colour measurement. Imaging Sci. J. 2001, 49, 63–80. [CrossRef]
29. Nakamura, J. Image Sensors and Signal Processing for Digital Still Cameras; CRC Press: Boca Raton, FL, USA, 2006.
30. ColorChecker. Product No. 50105 (Standard) or No. 50111 (Mini); Manufactured by the Munsell Color Services Laboratory of GretagMacbeth; X-Rite: Grand Rapids, MI, USA, 1976.
31. International Electrotechnical Commission. Multimedia Systems and Equipment–Colour Measurement and Management–Part 2-1: Colour Management–Default RGB Colour Space–sRGB; IEC 61966-2-1; International Electrotechnical Commission: Geneva, Switzerland, 1999.
32. Stokes, M.; Anderson, M.; Chandrasekar, S.; Motta, R. A Standard Default Color Space for the Internet-sRGB. W3C Document, 1996. Available online: https://www.w3.org/Graphics/Color/sRGB.html (accessed on 29 September 2020).
33. Johnson, R.A.; Wichern, D.W. Matrix algebra and random vectors. In Applied Multivariate Statistical Analysis, 6th ed.; Pearson: Upper Saddle River, NJ, USA, 2007; pp. 49–110.
34. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. BM3D Image Denoising with Shape-Adaptive Principal Component Analysis. In SPARS'09-Signal Processing with Adaptive Sparse Structured Representations; Inria Rennes-Bretagne Atlantique: Saint Malo, France, 2009.
35. Darmont, A. Spectral Response of Silicon Image Sensor. In Aphesa White Paper; Aphesa: Harzé, Belgium, 2009.
36. Bruna, A.; Farinella, G.M.; Guarnera, G.C.; Battiato, S. Forgery Detection and Value Identification of Euro Banknotes. Sensors 2013, 13, 2515–2529. [CrossRef] [PubMed]
37. Young, A.T. Rayleigh scattering. Phys. Today 1982, 35, 42–48. [CrossRef]
38. Pope, R.M.; Fry, E.S. Absorption spectrum (380–700 nm) of pure water. II. Integrating cavity measurements. Appl. Opt. 1997, 36, 8710–8723. [CrossRef]
39. Kou, L.; Labrie, D.; Chýlek, P. Refractive indices of water and ice in the 0.65- to 2.5-µm spectral range. Appl. Opt. 1993, 32, 3531–3540. [CrossRef]

40. Peñuelas, J.; Filella, I. Visible and near-infrared reflectance techniques for diagnosing plant physiological status. Trends Plant Sci. 1998, 3, 151–156. [CrossRef]
41. Kim, D.; Kim, Y.; Yoon, S.; Lee, D. Preliminary Study for Designing a Novel Vein-Visualizing Device. Sensors 2017, 17, 304. [CrossRef]
42. Arsalan, M.; Naqvi, R.A.; Kim, D.S.; Nguyen, P.H.; Owais, M.; Park, K.R. IrisDenseNet: Robust Iris Segmentation Using Densely Connected Fully Convolutional Networks in the Images by Visible Light and Near-Infrared Light Camera Sensors. Sensors 2018, 18, 1501. [CrossRef] [PubMed]
43. Brown, M.; Süsstrunk, S. Multispectral SIFT for Scene Category Recognition. In Proceedings of the Computer Vision and Pattern Recognition (CVPR 2011), Providence, RI, USA, 20–25 June 2011; IEEE Computer Society: Washington, DC, USA, 2011; pp. 177–184.
44. Sayood, K. Mathematical preliminaries for lossy coding. In Introduction to Data Compression, 3rd ed.; Morgan Kaufmann: San Francisco, CA, USA, 2006; pp. 195–226.
45. Mohimani, H.; Babaie-Zadeh, M.; Jutten, C. A fast approach for overcomplete sparse decomposition based on smoothed ℓ0 norm. IEEE Trans. Signal Process. 2009, 57, 289–301. [CrossRef]
46. ISO 11664-4:2008(E)/CIE S 014-4/E: Joint ISO/CIE Standard–Part 4: CIE 1976 L*a*b* Colour Space; ISO: Geneva, Switzerland, 2007.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).