
University of Huddersfield Repository

Liu, Xiaohong, Huang, Shujun, Zhang, Zonghua, Gao, F. and Jiang, Xiang

Full-field calibration of chromatic aberration using absolute phase maps

Original Citation

Liu, Xiaohong, Huang, Shujun, Zhang, Zonghua, Gao, F. and Jiang, Xiang (2017) Full-field calibration of color camera chromatic aberration using absolute phase maps. Sensors, 17 (5). ISSN 1424-8220

This version is available at http://eprints.hud.ac.uk/id/eprint/33177/

The University Repository is a digital collection of the research output of the University, available on Open Access. Copyright and Moral Rights for the items on this site are retained by the individual author and/or other copyright owners. Users may access full items free of charge; copies of full text items generally can be reproduced, displayed or performed and given to third parties in any format or medium for personal research or study, educational or not-for-profit purposes without prior permission or charge, provided:

• The authors, title and full bibliographic details are credited in any copy;
• A hyperlink and/or URL is included for the original metadata page; and
• The content is not changed in any way.

For more information, including our policy and submission procedure, please contact the Repository Team at: [email protected].

http://eprints.hud.ac.uk/

Article

Full-Field Calibration of Color Camera Chromatic Aberration using Absolute Phase Maps

Xiaohong Liu 1, Shujun Huang 1, Zonghua Zhang 1,2,*, Feng Gao 2 and Xiangqian Jiang 2

1 School of Mechanical Engineering, Hebei University of Technology, Tianjin 300130, China; [email protected] (X.L.); [email protected] (S.H.) 2 Centre for Precision Technologies, University of Huddersfield, Huddersfield HD1 3DH, UK; [email protected] (F.G.); [email protected] (X.J.) * Correspondence: [email protected] or [email protected]; Tel.: +86-22-60204189

Academic Editor: Vittorio M. N. Passaro Received: 26 March 2017; Accepted: 3 May 2017; Published: 6 May 2017

Abstract: The refractive index of a lens varies for different wavelengths of light, and thus the same incident light with different wavelengths exits the lens along different paths. This characteristic of lenses causes images captured by a color camera to display chromatic aberration (CA), which seriously reduces image quality. Based on an analysis of the distribution of CA, a full-field calibration method based on absolute phase maps is proposed in this paper. Red, green, and blue closed sinusoidal fringe patterns are generated, consecutively displayed on an LCD (liquid crystal display), and captured by a color camera from the front viewpoint. The phase information of each color fringe is obtained using a four-step phase-shifting algorithm and the optimum fringe number selection method. CA causes the unwrapped phases of the three channels to differ. These deviations can be computed by comparing the unwrapped phase data of the red, green, and blue channels in polar coordinates. CA calibration is accomplished in Cartesian coordinates. The systematic errors introduced by the LCD are analyzed and corrected. Simulated results show the validity of the proposed method, and experimental results demonstrate that the proposed full-field calibration method based on absolute phase maps will be useful for practical software-based CA calibration.

Keywords: chromatic aberration; closed sinusoidal fringe patterns; absolute phases; LCD (liquid crystal display) screen; systematic errors; deviation calibration

1. Introduction

A camera is an indispensable part of optical measurement systems, and it is the key to realizing fast and noncontact measurements. In particular, a color camera can simultaneously obtain the color texture and three-dimensional (3D) shape information of an object, which substantially improves the measurement speed. However, because of the optical characteristics of lenses, chromatic aberration (CA) exists in the captured images, which seriously affects the quality of the image and the accuracy of the measurement results. Therefore, to improve the measurement speed, and to obtain a precise color texture and the 3D data of an object's morphology, the correction of the CA for each color channel has become an inevitable and urgent problem.

There are two main approaches to CA elimination. One is hardware design, which usually uses costly fluoro-crown glasses, abnormal flint glasses, or extra-low dispersion glasses [1]. Using a precise optical calculation, lens grinding, and lens assembly, a lens that focuses light of different wavelengths at the same position is produced, enhancing the clarity and color fidelity of images. The other approach is software elimination, in which the camera captures images and image processing is then used to correct the color differences.

Sensors 2017, 17, 1048; doi:10.3390/s17051048 www.mdpi.com/journal/sensors

Dollond invented two sets and two slices of concave and convex achromatic lenses in 1759, and Chevalier invented a set and two slices of concave and convex achromatic lenses in 1821 [2,3]. In 1968 and 1969, Japan's Canon Inc. synthesized artificial fluorite (CaF2, calcium fluoride) and developed Ultra-low Dispersion (UD) and Super UD glasses, launching the Canon FL-F300 F5.6, FL-F500 F5.6, and mixed low dispersion lenses. In 1972, an extra-low dispersion lens with a lower CA than that of a UD lens was synthesized, but this lens can absorb red light and its brightness is poor [4]. In 2015, a completely flat, ultra-thin lens was invented by the Harvard School of Engineering and Applied Sciences. The lens can focus different wavelengths of light at the same point and achieve instant color correction in one extremely thin, miniaturized device [5]. This technology is expected to be applied to optical elements in the future, but its time to market and price are unknown. Although hardware design can correct CA to a certain extent, it cannot eliminate the color difference completely. In addition, this approach leads to a longer development cycle, higher cost, and heavier camera. Therefore, a simple, fast, and low-cost method to effectively correct lens CA is increasingly becoming of interest.

Zhang et al. and Sterk et al. used a calibration image with markers as a reference to calculate the differential displacement of reference points in different colors and then corrected the CA based on certain correction ratios [6,7]. However, their accuracy relies on the number of markers. Zhang et al. proposed a novel linear compensation method to compensate for longitudinal CA (LCA) resulting from the imaging lenses in a color fringe projection system [8]. The precision is improved to some extent, but it is only applicable to the optimum three-fringe number selection algorithm. Willson et al. designed an active lens control system to calibrate CA.
The best focus distance and relative magnification coefficients of the red, green, and blue channels are obtained. Axial CA (ACA) and LCA are reduced by adjusting the distance between the imaging plane and the lens based on the obtained parameters [9]. However, the system is complicated and it is difficult to ensure precision. Boult et al. used image warping to correct CA. First, the best focus distance and relative magnification coefficients are acquired using the active lens control system [9]. Then, these parameters are used in the image warping function to calibrate CA in the horizontal and vertical directions [10]. This method needs an external reference object as the standard of deformation: obtaining more feature points leads to better processing results. Kaufmann et al. established the relationship between the deviation of the red, green, and blue channels caused by LCA and pixel position with the help of a triangular mesh, and then used least-squares fitting to effectively compensate for LCA [11]. The method's precision is affected by the frequency of the triangular mesh. Mallon et al. calibrated LCA between different color channels using a high-density checkerboard [12], but did not achieve full-field calibration. Chung et al. used image edge information to eliminate color differences. Regardless of whether the color stripe is caused by axial or lateral CA, they regarded the green channel as the benchmark and first analyzed the behavior of the image edge without CA. Then, the initial and final pixel positions of the color differences between the green and red channels, as well as between the green and blue channels, are obtained. Finally, CA is determined and corrected using the region above the pixel of interest [13]. This method can eliminate obvious color differences in the image, but it is not good for regions with no obvious color differences. Chang et al.
proposed a method of false color filtering to improve the image blurring and chromatic stripes produced by both ACA and LCA [14]. Although the method can correct ghosting caused by color differences, its process is complicated and some parameters must be set empirically. Therefore, the existing methods cannot completely remove CA in a color image. Huang et al. calibrated the error of a camera and projector caused by LCA by extracting the centers of different color circles that were projected onto a calibration board. This method can obtain the errors of limited positions, but still needs to acquire the other positions by interpolation [15].

Phase data methods based on fringe projection profilometry have been widely applied to the 3D shape measurement of an object's surface because of the advantages of a full-field measurement, high accuracy, and high resolution. When fringe patterns are coded into the different major color channels of a DLP (digital light processing) projector and captured by a color CCD camera, the obtained absolute phase data have different values in each color channel because of CA. Hence, the

phase data are related to CA and can be used to calibrate CA using full-field absolute phase maps. Two common methods of calculating the wrapped phase data are multi-step phase-shifting [16] and transform-based algorithms [17]. Although a transform-based algorithm can extract the wrapped phase from a single fringe pattern, it is time consuming and acquires less accurate phase data. Therefore, the four-step phase-shifting algorithm is used to accurately calculate the wrapped phase data [18]. To obtain the absolute phase map, many spatial and temporal phase unwrapping algorithms have been developed [19–26]. By comparing the absolute phase map in different color channels pixel by pixel, full-field CA can be accurately determined.

This paper presents a novel method to calibrate and compensate for CA in color channels using absolute phase maps. In contrast to the above correction methods, accurate full-field pixel correspondence relationships among the red, green, and blue channels can be determined by comparing the unwrapped phase data of the three color channels. In the rest of the paper, Section 2 describes CA behavior, explains the principle of the proposed method, and analyzes the systematic error. Section 3 shows the results obtained using simulated and experimental data. The conclusions and remarks regarding future work are given in Section 4.

2. Principle

2.1. Analysis of CA

CA is divided into position and magnification CA. In the former, different wavelengths of light at the same point on the optical axis are focused at different depths. This produces circular defocused spots and leads to image blurring. It can also be called ACA. Figure 1a,b shows the focus positions of the red, green, and blue light without and with ACA, respectively. Figure 1c is the circular defocused spot. In the latter, the refractive index varies for different wavelengths of light, which leads to different magnifications and color stripes, as shown in Figure 1e,f. This type of CA is also known as LCA or radial CA. Figure 1d shows the imaging of red, green, and blue light when there is no LCA. The process of CA calibration is to correct Figure 1b to Figure 1a, and Figure 1e to Figure 1d, to improve image clarity and resolution.

Figure 1. CA. (a) Imaging of red, green, and blue light without ACA; (b) ACA; (c) circular defocused spots; (d) imaging of red, green, and blue light without LCA; (e) LCA; and (f) color stripes.


2.2. Measurement and Calibration

2.2.1. Measurement of CA

As described in Section 2.1, ACA results in radially symmetrical distributed circular dispersion spots, and LCA results in radially distributed color stripes. Therefore, red, green, and blue closed sinusoidal fringe patterns are generated to make the LCA distribution radially symmetrical, like ACA. The imaging positions of the color fringes are different because of the CA of the color camera, so the CA among the three channels at each point can be computed by comparing the phase of the three channels.

The specific method is shown in Figure 2. First, red, green, and blue closed sinusoidal fringe patterns consistent with the four-step phase shifting and the optimum fringe number selection method are generated by software and sequentially displayed on an LCD (liquid crystal display). They are then captured by a CCD (charge coupled device) color camera and saved to a PC (personal computer). Second, the four-step phase-shifting algorithm is used to demodulate the wrapped phase of the three channels, and the optimum fringe number selection method is used to calculate their unwrapped phases φ_R(m, n), φ_G(m, n), and φ_B(m, n), where m = 1, 2, 3, ... M and n = 1, 2, 3, ... N are the indices of the pixels in the row and column directions, respectively, and M and N are the size of the captured image. Because of the influence of CA, the absolute phase of each pixel position in the three color channels is not equal, except at the principal point of the camera. If the blue channel is considered to be the base, the absolute phase deviation among the three color channels can be obtained by Equations (1) and (2). Finally, according to the absolute phase deviation of each color channel, the pixel deviation of each point can be calculated.

∆φ_RB(m, n) = φ_R(m, n) − φ_B(m, n)  (1)

∆φ_GB(m, n) = φ_G(m, n) − φ_B(m, n)  (2)
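Computed over full-field phase maps, Equations (1) and (2) are simple per-pixel subtractions. A minimal sketch in Python, assuming the three unwrapped phase maps are available as NumPy arrays (the numeric values below are hypothetical):

```python
import numpy as np

def phase_deviation_maps(phi_r, phi_g, phi_b):
    """Per-pixel absolute phase deviations of the red and green
    channels relative to the blue channel, Equations (1) and (2)."""
    d_rb = phi_r - phi_b  # Delta-phi_RB(m, n)
    d_gb = phi_g - phi_b  # Delta-phi_GB(m, n)
    return d_rb, d_gb

# Toy 2x2 unwrapped phase maps (radians, hypothetical values)
phi_r = np.array([[6.30, 6.40], [6.50, 6.60]])
phi_g = np.array([[6.29, 6.39], [6.49, 6.59]])
phi_b = np.array([[6.28, 6.38], [6.48, 6.58]])
d_rb, d_gb = phase_deviation_maps(phi_r, phi_g, phi_b)
```

At the principal point both deviation maps are zero; elsewhere their magnitude grows with the CA of the lens.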

Figure 2. Flow chart of CA measurement.

2.2.2. Calibrating CA

Figure 3 shows the process of calibrating CA. Three absolute phase maps are obtained to build pixel-to-pixel correspondences among the three color channels. First, the Cartesian coordinates are converted to polar coordinates. Second, the absolute phase at the same radius position is extracted from the three color channels and an average value is obtained. Third, to avoid an extrapolation error, the blue channel is regarded as the benchmark and the absolute phase φ_B_r at radius r is extracted. Fourth, the absolute phases φ_R_r and φ_G_r of the red and green channels at the same radius are extracted. Fifth, φ_R_r and φ_G_r are each compared to φ_B_r, and new radiuses r_rb and r_gb of φ_B_r in the red and green channels are computed through 1D interpolation if they are not equal. Otherwise, there is no CA at this point. Then, the original radius r is replaced with r_rb and r_gb. Finally, the polar coordinates are converted back to Cartesian coordinates, and the pixel deviations in the X and Y directions between the red and blue channels, as well as between the green and blue channels, caused by CA can be computed using Equations (3)–(6).

∆x_RB = x_R − x_B  (3)

∆y_RB = y_R − y_B  (4)

∆x_GB = x_G − x_B  (5)

∆y_GB = y_G − y_B  (6)

Figure 3. Flow chart of calibrating CA.

Here, x_B and y_B are the original coordinates of the blue channel; x_R and y_R are the actual coordinates of the unwrapped phase of the red channel; x_G and y_G are the actual coordinates of the absolute phase of the green channel; ∆x_RB and ∆y_RB are the pixel deviations in the horizontal and vertical directions, respectively, between the red and blue channels; and ∆x_GB and ∆y_GB are the pixel deviations in the horizontal and vertical directions, respectively, between the green and blue channels.
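The radius-correction step (the 1D interpolation in Figure 3) can be sketched as follows, assuming each channel's absolute phase has already been averaged into a monotonically increasing radial profile; the profiles below are illustrative, not measured data:

```python
import numpy as np

def corrected_radius(r, phi_chan, phi_b):
    """Find, for each sample radius r[k], the radius at which the
    red/green radial phase profile phi_chan reaches the blue-channel
    phase phi_b[k] (1D interpolation; profiles must be monotonic)."""
    return np.interp(phi_b, phi_chan, r)

# Illustrative monotonic profiles: phase versus radius (pixels)
r = np.arange(5.0)            # radii 0..4
phi_b = 2.0 * r               # blue channel: the benchmark
phi_r_prof = 2.0 * r + 0.2    # red phase runs slightly ahead
r_rb = corrected_radius(r, phi_r_prof, phi_b)
```

Here the red channel reaches each blue-channel phase value at a slightly smaller radius, so r_rb < r for the interior samples; replacing r with r_rb (and likewise r_gb) realizes the fifth step above.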


Accurate compensation for LCA among the three channels can be realized by shifting the red channel by the sub-pixel deviations ∆x_RB and ∆y_RB, and the green channel by ∆x_GB and ∆y_GB, to the corresponding sub-pixel positions in the blue channel [27]. A 2D interpolation method is applied to the whole corrected image to accurately find the positions. Therefore, color information in the three channels coincides after full-field CA compensation.
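One way to realize the 2D interpolation is plain bilinear resampling of the red (or green) channel at the deviated sub-pixel positions. This is a generic sketch of that idea, not the paper's specific implementation:

```python
import numpy as np

def bilinear_sample(img, ys, xs):
    """Bilinearly sample img at float coordinates, clamped to the border."""
    m, n = img.shape
    ys = np.clip(ys, 0, m - 1)
    xs = np.clip(xs, 0, n - 1)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, m - 1)
    x1 = np.minimum(x0 + 1, n - 1)
    wy, wx = ys - y0, xs - x0
    top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
    bot = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
    return top * (1 - wy) + bot * wy

def compensate_channel(chan, dx, dy):
    """Resample a channel at its deviated positions (e.g. dx = delta-x_RB,
    dy = delta-y_RB) so its content lands on the blue channel's grid."""
    m, n = chan.shape
    yy, xx = np.mgrid[0:m, 0:n].astype(float)
    return bilinear_sample(chan, yy + dy, xx + dx)

# Toy check: a horizontal ramp image shifted right by half a pixel
chan = np.tile(np.arange(8.0), (8, 1))
out = compensate_channel(chan, np.full((8, 8), 0.5), np.zeros((8, 8)))
```

Interior pixels of the ramp are simply offset by half a pixel value, while border pixels are clamped rather than extrapolated.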

2.3. Phase Demodulation

Phase demodulation is an essential procedure in fringe projection profilometry and fringe reflection measurements. In this paper, because the fringe pattern on an LCD screen is used to calibrate the CA of a color CCD camera, a four-step phase-shifting algorithm [18] and the optimum three-fringe number selection method [23,24] are chosen to demodulate the wrapped and absolute phases, respectively. Furthermore, each fringe pattern is captured six times and averaged in order to reduce the disturbance of noise.

2.3.1. Four-Step Phase-Shifting Algorithm

Phase shifting is a common method in fringe pattern processing, and the captured deformed fringes can be represented as follows:

I(x, y) = I_0(x, y) + I_m(x, y) cos φ(x, y) + I_n(x, y)  (7)

where I(x, y) is the brightness of the captured pixel; I_0(x, y) and I_m(x, y) represent the background intensity and modulation depth, respectively; φ(x, y) is the phase change created by the object surface; and I_n(x, y) is the random noise of the camera, which can be ignored in the actual calculation. To obtain φ(x, y), researchers have proposed a variety of multi-step phase-shifting algorithms [16]. Because of its higher precision and fewer fringes, the four-step phase-shifting algorithm has been widely used in practical applications [19]. It can be represented as follows:

I_i(x, y) = I_0(x, y) + I_m(x, y) cos[φ(x, y) + α_i],  i = 1, 2, 3, 4  (8)

where α_1 = 0, α_2 = π/2, α_3 = π, α_4 = 3π/2. According to trigonometric function formulae, φ(x, y) can be solved by Equation (9):

φ(x, y) = tan⁻¹[(I_4(x, y) − I_2(x, y)) / (I_1(x, y) − I_3(x, y))]  (9)

Because φ(x, y) ranges from −π to π, it is necessary to expand it into the continuous phase.
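Numerically, Equation (9) is usually evaluated with a two-argument arctangent, so that the signs of the numerator and denominator place the wrapped phase in the correct quadrant of (−π, π]. A small self-check under the fringe model of Equation (8), with arbitrary background, modulation, and phase values:

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Four-step phase-shifting, Equation (9): shifts of 0, pi/2,
    pi, 3pi/2 give phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic pixel: background 0.5, modulation 0.4, true phase 1.0 rad
phi_true = 1.0
shifts = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
i1, i2, i3, i4 = (0.5 + 0.4 * np.cos(phi_true + a) for a in shifts)
phi = wrapped_phase(i1, i2, i3, i4)  # recovers phi_true
```

Since I_4 − I_2 = 2I_m sin φ and I_1 − I_3 = 2I_m cos φ, the modulation cancels and the background never enters, which is why the four-step variant is robust to uneven illumination.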

2.3.2. Optimum Fringe Number Selection Method

The optimum fringe number selection method is a kind of phase unwrapping method proposed by Towers et al. [23,24]. It determines the numbers of fringes to use and can be represented as follows:

N_fi = N_f0 − N_f0^((i−1)/(n−1)),  i = 1, . . . , n − 1  (10)

where N_f0 and N_fi are the maximum and the i-th number of fringes, respectively, and n is the number of fringe sets used. When n is three, this method is called the optimum three-fringe selection method. For example, when N_f0 is 49 and n is equal to three, the other numbers are N_f1 = N_f0 − 1 = 48 and N_f2 = N_f0 − √N_f0 = 42. Because a single fringe produced by a difference in the frequencies of N_f0 and N_fi covers the entire field of view, the optimum three-fringe selection method solves the problem of fringe order and has the greatest reliability.
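Equation (10) and the worked example (N_f0 = 49, n = 3) can be reproduced in a few lines; rounding to whole fringe counts is an assumption of this sketch:

```python
def optimum_fringe_numbers(nf0, n):
    """Optimum fringe number selection, Equation (10):
    N_fi = N_f0 - N_f0**((i - 1)/(n - 1)) for i = 1, ..., n - 1,
    rounded to whole fringe counts."""
    return [nf0] + [round(nf0 - nf0 ** ((i - 1) / (n - 1)))
                    for i in range(1, n)]

print(optimum_fringe_numbers(49, 3))  # → [49, 48, 42]
```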

2.4. Systematic Error

2.4.1. Analysis of Systematic Error

The LCD screen is an essential device in the calibration system, and it is mainly composed of a thin film transistor (TFT), upper and lower polarizing plates, glass substrates, alignment films, liquid crystal, RGB color filters, and a backlight module. The RGB color filters are stuck to the upper glass substrate. The three R, G, and B color filters compose a unit pixel of an LCD, which is mainly used to make each pixel display a different grayscale or different images. There are many kinds of color filter arrangements for LCDs, and the common ones are arrangements of stripes, triangles, mosaics, and squares [28], as shown in Figure 4. Because different color filters only allow one color of light to pass through, there are different position deviations when displaying red, green, and blue fringes using LCDs with different color filter arrangements. Therefore, systematic errors can be introduced by the LCD and should be eliminated before correcting the CA.

Figure 4. Kinds of color filter arrays in an LCD. (a) Strip distribution; (b) triangular distribution; (c) mosaic distribution; and (d) square distribution.

Compared to the triangular, mosaic, and square arrangements, the systematic errors introduced by the strip arrangement are directional and periodic, and there are hardly any systematic errors in the vertical or horizontal direction for different LCDs, such as the LP097QX1-SPAV and LP097QX2-SPAV (LG). The former displays very little systematic error in the vertical direction. However, the latter has errors in the horizontal direction. The LP097QX2-SPAV was chosen in this system. As shown in Figure 4a, red, green, and blue filters are tiled in the horizontal direction; however, the same color filters tile in the vertical direction, so the systematic errors in the horizontal direction are larger than those in the vertical direction. As shown in Figure 5, the red filter is regarded as the base. When vertical sinusoidal fringe patterns with different colors are displayed on the LCD, if a 0 level fringe is captured by a certain pixel of the camera, then an exaggerated −1 level fringe for the green filter and −2 level fringe for the blue filter are captured by the same pixel of the camera.

Figure 5. Diagram of systematic errors introduced by the LCD.

2.4.2. System Error Verification and Elimination

To prove the correctness of the above analysis, the following method is proposed, as shown in Figure 6. First, the principal point is calibrated. Second, red, green, and blue vertical and horizontal fringe patterns consistent with the four-step phase-shifting algorithm and the optimum fringe number selection method are generated by software and sequentially displayed on the LCD.

They are then captured by the CCD color camera and saved to a personal computer. Third, the four-step phase-shifting algorithm and the optimum fringe number selection method are used to calculate the wrapped and unwrapped phases, respectively. Fourth, the unwrapped phases of the principal point for the vertical and horizontal fringe patterns of the three channels are extracted, expressed as φrv_pp, φgv_pp, φbv_pp and φrh_pp, φgh_pp, φbh_pp, respectively. Finally, the phases are compared. If no systematic error in the horizontal direction has been introduced by the LCD, φrv_pp, φgv_pp, and φbv_pp are equal; otherwise, systematic error has been introduced. The same process is used to determine systematic error in the vertical direction.
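The wrapped-phase step of this procedure, four-step phase shifting with a π/2 step, can be sketched as follows (a minimal NumPy version; the frame variables and test values are illustrative):

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Four-step phase shifting: frames captured at phase shifts of
    0, pi/2, pi, and 3*pi/2; returns the wrapped phase in (-pi, pi].
    With i_k = A + B*cos(phi + k*pi/2):
      i3 - i1 = 2*B*sin(phi),  i0 - i2 = 2*B*cos(phi)."""
    return np.arctan2(i3 - i1, i0 - i2)
```

The background intensity A and modulation B cancel out, which is why the method is robust to non-uniform illumination.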

Figure 6. Flow chart of systematic error measurement.

The sequences of the vertical and horizontal sinusoidal fringes are changed, and the phases at the principal point are compared and analyzed. If the phase difference changes as the sequence changes, there is systematic error; otherwise, there is not.

Phases φrv_pp, φgv_pp, and φbv_pp have corresponding points on the LCD, and their coordinates can be determined through an inverse operation of Equation (8). The systematic errors in the horizontal LCD direction are the differences of these coordinates. Similarly, the systematic errors in the vertical direction can also be obtained. Therefore, the systematic errors introduced by the LCD can be eliminated before the fringe patterns are generated.

3. Experiments and Results

To test the proposed method, an experimental system has been set up, as illustrated in Figure 7. The system includes a liquid crystal display (LCD) screen, a CCD color camera, and a PC. The computer is connected to the camera and the LCD screen by a gigabit Ethernet cable and HDMI (high definition multimedia interface), respectively. This setup generates the circular fringe patterns and saves and processes the data captured by the camera. The camera is used to capture the images displayed on the LCD screen, and the LCD screen is used to display the images generated by the computer.

Figure 7. Diagram of the calibration system.

Before calibrating the LCA of the lens, the system needs to satisfy two conditions: one is that the LCD is parallel to the image plane; the other is that the principal point of the camera is aligned with the center of the circular sinusoidal fringes. These conditions can be satisfied as follows. First, the intrinsic parameters of the CCD camera are calibrated using a checkerboard with the Camera Calibration Toolbox for Matlab [29]. Second, a picture of the checkerboard is generated by software, displayed on the LCD screen, and captured by the CCD camera. The size of the checkerboard can be obtained because the unit pixel size of the LCD is known. The external parameters (the 3D position of the checkerboard in the camera reference frame, i.e., the R and T matrices) and the angle between the LCD and the image plane of the camera in the X, Y, and Z directions can then be computed. They provide the basis for the parallel adjustment by using a three-axis rotary table. Moreover, the angle θ between the normal vector of the image plane and that of the LCD can be used to evaluate the parallelism of the adjustment, and it can be obtained using the following equation:

θ = cos⁻¹[ (V′_lcd · V′_image_plane) / (|V′_lcd| |V′_image_plane|) ]    (11)

where V′_lcd is the normal vector of the LCD and V′_image_plane is the normal vector of the image plane of the camera.

Finally, blue orthogonal fringes are displayed on the LCD and captured by the CCD camera to acquire the positional relationship, so that the principal point of the camera corresponds to row and column coordinates on the display screen. This can be achieved according to the procedure of Figure 6 in Section 2.4. After obtaining the unwrapped phase of the principal point, its corresponding row and column coordinates on the LCD can be computed through the inverse operation of Equation (8). Then, these row and column coordinates are taken as the center to generate circular fringes, which are displayed on the LCD. Therefore, the two conditions above are both satisfied. The proposed CA calibration method was tested using simulated data first and then actual experimental data.

3.1. Simulation

Twelve closed circular sinusoidal fringe patterns with a resolution of 768 × 1024 were generated and modulated into the red, green, and blue channels of the LCD screen. The fringe-number sequence was 32, 31.5, and 28, and the phase-shift step was π/2. Wrapped and unwrapped phases can be precisely computed using the four-step phase-shifting algorithm and the optimum three-fringe number method. Although 32, 31.5, and 28 are not themselves optimum three-fringe numbers, the fringes are circular, so the unwrapped phase can be obtained using the optimum three-fringe numbers 64, 63, and 56 in the simulation [24]. The average intensity, fringe contrast, and intensity noise of the fringes generated by the computer are 128, 100, and 2.5%, respectively, and the principal point of the camera is at (384, 512). To obtain fringes with different magnifications, the phase per pixel on the LCD screen for the red, green, and blue channels is 0.1971, 0.1963, and 0.1952, respectively.

Figure 8a shows the original composite fringe pattern image, where the color stripes clearly appear far from the principal point. The original unwrapped phases of the red, green, and blue channels are different, as shown in Figure 9a,d. Figure 9b,e shows the pixel deviation maps caused by the CA of the lens between the red and blue channels and between the green and blue channels, respectively. These results verify the effectiveness of the proposed method, because the phase deviations are decreased, as shown in Figure 9c,f, and the color stripes are largely eliminated, as shown in Figure 8b.
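For reference, the core two-frequency step behind the optimum three-fringe unwrapping used in this simulation can be sketched as follows (a simplified, noise-free version; the full method in [24] chains the 64, 63, and 56 fringe phases):

```python
import numpy as np

def unwrap_two_freq(phi_hi, phi_lo, n_hi, n_lo):
    """Resolve the fringe order of the high-frequency wrapped phase using
    the beat between two close fringe numbers (e.g., 64 and 63), whose
    difference spans exactly (n_hi - n_lo) fringes over the whole field."""
    beat = np.mod(phi_hi - phi_lo, 2 * np.pi)      # beat phase (one fringe for n_hi - n_lo = 1)
    scaled = beat * n_hi / (n_hi - n_lo)           # coarse estimate of the full phase
    m = np.round((scaled - phi_hi) / (2 * np.pi))  # integer fringe order
    return phi_hi + 2 * np.pi * m
```

In practice the coarse estimate is noisy, which is why the optimum three-fringe selection uses a third frequency to keep the rounding step reliable.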

(a) (b)

Figure 8. Simulation images. (a) Original composite fringe pattern image and (b) composite fringe pattern image after CA compensation.

(a) (b) (c)


(d) (e) (f)

Figure 9. Simulation results. (a) Original phase deviations; (b) original pixel deviations; (c) phase deviations after CA compensation between the red and blue channels; (d) and (e) are the phase deviation and pixel deviation between the green and blue channels; and (f) is the phase deviation after CA compensation.

3.2. Experiment Results for CA Compensation

Figure 10 shows the experimental system. This system mainly consists of off-the-shelf components: an SVCam-ECO655 color camera with a 2050 × 2448 resolution and a 3.45 × 3.45 μm pixel pitch, a CCTV (closed circuit television) zoom lens with a focal length of 6–12 mm and an adjustable aperture, and an LP097QX2 TFT-LCD display (LG) with a physical resolution of 1536 × 2048 and a pixel pitch of 0.096 × 0.096 mm. Its color filter is distributed in strips. A moiré fringe pattern appears when the camera looks directly at the LCD screen; to solve this problem, a holographic projection film was attached to the LCD screen surface.


Figure 10. Experiment system.

The normal vector of the LCD display plane in the camera reference frame is (−0.009510, −0.005589, −0.999939), so the angle between the camera target and the LCD display is 0.6320°. Figure 11 shows the wrapped and unwrapped phase maps from the captured fringe patterns in the red channel of the color camera, where the distribution is circular. Figure 12 shows the fringe patterns in the 90° direction of the captured image, where Figure 12a is the original fringe pattern affected by CA, and Figure 12b,c is the enlarged image and brightness curve of the red area in Figure 12a, respectively. Correspondingly, Figure 12d–f are the images after correction. It can be seen in Figure 12e that the purple stripes in Figure 12b are reduced, and the intensity curves of the three channels coincide after compensation using the proposed method. Figure 13 shows the original unwrapped phase deviations caused by CA and the phase deviations after CA correction between the red and green channels, and between the blue and green channels, in the 90° direction of the captured image, respectively. The unwrapped phase deviation of the three channels is greatly reduced after correction. Figure 14 shows the original closed circular sinusoidal fringe patterns affected by CA, with Figure 14b being the enlarged image of the red area in Figure 14a. After compensation, the color stripes are no longer obvious, as illustrated in Figure 14c,d. Figure 15 demonstrates the unwrapped phase deviations before and after CA compensation for the red, green, and blue channels. It is clear that the phase deviations between the red and blue channels, and between the green and blue channels, are reduced after CA correction.

(a) (b)

Figure 11. Phase maps of the red channel. (a) Wrapped phase map and (b) unwrapped phase map.

(a) (b) (c)




(d) (e) (f)

Figure 12. Fringe patterns at 90°. (a) Original fringe patterns affected by CA; (b) enlarged image of the red area in (a); (c) intensity curve of the three channels of (b); (d) fringe patterns after CA compensation; (e) enlarged image of the red area in (d); and (f) intensity curve of the three channels of (e).

(a) (b)

Figure 13. Unwrapped phase differences at 90°. (a) Original phase differences among the three channels and (b) phase differences after CA correction.

(a) (b)

(c) (d)

Figure 14. Closed circle sinusoidal fringe patterns. (a) The original fringe patterns affected by CA; (b) enlarged image of the red area in (a); (c) fringe patterns after CA compensation; and (d) enlarged image of the red area in (c).


(a) (b)

(c) (d)

Figure 15. Unwrapped phase deviations for the red, green, and blue channels. (a) Original phase deviation affected by the CA between the red and blue channels; (b) phase deviation after CA compensation for the red and blue channels; (c) original phase deviation caused by CA between the green and blue channels; and (d) phase deviation after CA compensation for the green and blue channels.

When qualitatively compared to the CA correction methods based on identification points in Refs. [12,15], the proposed method builds the full-field, pixel-by-pixel correspondence of the deviation caused by CA among the three color channels, whereas the methods in Refs. [12,15] can only obtain the CA at discrete points, with an accuracy that depends on the density of the checkerboard pattern and circles. In order to quantitatively evaluate the performance, the method in Ref. [13] was applied to the captured closed circle sinusoidal fringe patterns. The PSNR (peak signal-to-noise ratio) of the image was calculated after CA correction using both methods, as shown in Table 1. The PSNR of the proposed method is larger than that of the method in Ref. [13]; therefore, the proposed method gives better results than that in Ref. [13].
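For reproducibility, the PSNR values reported here follow the standard definition; a minimal sketch (assuming 8-bit images with a peak value of 255):

```python
import numpy as np

def psnr(reference, image, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two same-sized images."""
    diff = reference.astype(np.float64) - image.astype(np.float64)
    mse = np.mean(diff ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

A higher PSNR between the corrected image and the reference indicates less residual chromatic misregistration.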

Table 1. PSNR comparison of the closed circle sinusoidal fringe patterns after CA correction by using the method in this paper and Ref. [13].

        Proposed Method    Method in Ref. [13]
PSNR    36.1543            34.6130

3.3. Systematic Error Analysis

Table 2 shows the phase deviations of the principal point in the horizontal and vertical directions among the red, green, and blue channels of fringes for the same sequence at different positions on the LCD. It shows that the phase deviation between the red and green channels is less than 0.15, and the phase deviation between the green and blue channels is larger than this. Moreover, the phase deviations in the vertical direction are far smaller than those in the horizontal direction, verifying the above analysis. Table 3 shows the pixel deviations in the horizontal and vertical directions among the red, green, and blue filters of the LCD. The pixel deviation in the horizontal direction between the red and green filters is near 0.32, and it is 0.44 between the green and blue filters. Moreover, the pixel deviation in the vertical direction is very small, with a value of about 0.05. Table 4 shows the phase deviations of the principal point in the horizontal direction among the red, green, and blue channels for different sequences at the same LCD position. It shows that as the fringe sequence increases, the phase deviation among the three channels increases. The phase data can also be converted to a pixel deviation among the three color filters of the LCD. Figure 16 shows the phase deviations caused by systematic errors and the CA at the middle row. Figure 16a shows the original deviations and Figure 16b shows the deviations after compensating for the systematic errors introduced by the LCD. These results show the validity of the analysis in Section 2.2.
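The conversion from a phase deviation to an LCD pixel deviation can be sketched as follows (the fringe period T is a placeholder; the actual factor follows from the fringe sequence used and the inverse of Equation (8)):

```python
import numpy as np

def phase_to_pixel(phase_dev, period_pixels):
    """Convert a phase deviation (radians) to an LCD pixel deviation,
    assuming the displayed fringe phase advances linearly by 2*pi per
    period of T LCD pixels."""
    return phase_dev * period_pixels / (2.0 * np.pi)
```

With this linear model, a fixed phase deviation maps to a pixel deviation proportional to the fringe period, which is how the phase data of Table 2 relate to the pixel data of Table 3.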


Figure 16. Phase deviations in the middle row between the red and green channels, as well as between the green and blue channels. (a) Original phase deviations and (b) phase deviations after compensation.

Table 2. Phase deviations of the principal point in the horizontal and vertical directions between the red, green, and blue channels of fringes with the same sequence at different positions of the LCD, respectively. (Units: phase)

Horizontal Position   Red and Green   Green and Blue     Vertical Position   Red and Green   Green and Blue
        1                0.1440           0.1852                 1              −0.0047          0.0104
        2                0.1437           0.1983                 2              −0.0155          0.0241
        3                0.1379           0.2086                 3              −0.0176          0.0081
        4                0.1351           0.1676                 4              −0.0233          3.9588 × 10^−4
        5                0.1465           0.1847                 5              −0.0155          0.0324
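The deviations reported in Figure 16 and Table 2 are per-pixel differences between absolute (unwrapped) phase maps of two color channels. A minimal sketch, assuming synthetic unwrapped phase maps with a hypothetical 16-pixel fringe pitch and a constant CA-like shift between the red and green channels:

```python
import numpy as np

def phase_deviation(phase_a: np.ndarray, phase_b: np.ndarray) -> np.ndarray:
    """Per-pixel deviation between two absolute (unwrapped) phase maps."""
    return phase_a - phase_b

# Synthetic vertical-fringe phase maps (names and values are illustrative only).
h, w = 480, 640
x = np.arange(w, dtype=np.float64)
phase_green = np.tile(2 * np.pi * x / 16.0, (h, 1))  # 16-pixel fringe pitch
phase_red = phase_green + 0.14                       # constant CA-like phase shift
dev = phase_deviation(phase_red, phase_green)
print(round(float(dev.mean()), 3))  # 0.14, matching the injected shift
```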

Table 3. Pixel deviations in the horizontal and vertical directions among the red, green, and blue color filters of the LCD. (Units: pixel)

Horizontal Position   Red and Green   Green and Blue     Vertical Position   Red and Green   Green and Blue
        1                0.3260           0.4191                 1              −0.0141          0.0255
        2                0.3253           0.4488                 2              −0.0378          0.0589
        3                0.3122           0.4723                 3              −0.0431          0.0198
        4                0.3058           0.3793                 4              −0.0570          9.6779 × 10^−4
        5                0.3315           0.4181                 5              −0.0378          0.0791
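A phase deviation (Table 2) converts to a pixel deviation (Table 3) through the fringe period: one full 2π cycle of absolute phase spans one period. A small sketch with a hypothetical 16-pixel period (so the output does not reproduce the exact Table 3 values, which depend on the actual fringe pitch used):

```python
import numpy as np

def phase_to_pixel(delta_phi: float, fringe_period_px: float) -> float:
    """Convert a phase deviation (radians) into a pixel deviation,
    given the fringe period in pixels (2*pi of phase = one period)."""
    return delta_phi * fringe_period_px / (2.0 * np.pi)

# Red/green phase deviation at position 1 of Table 2, hypothetical 16-pixel period.
print(round(phase_to_pixel(0.1440, 16.0), 4))  # ≈ 0.3667 pixels
```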

Table 4. Phase deviations of the principal point in the horizontal direction between the red, green, and blue channels under the premise of different sequences and the same position of the LCD. (Units: phase)

Vertical Fringes    Red and Green   Green and Blue
[64 63 56]             0.0729           0.0545
[100 99 90]            0.1056           0.1121
[121 120 110]          0.1200           0.1399
[144 143 132]          0.1629           0.1528
[256 255 240]          0.2701           0.2511


4. Conclusions

This paper presented a novel method for full-field calibration and compensation of CA among the red, green, and blue channels of a color camera based on absolute phase maps. The radial correspondence between the three channels is obtained using phase data calculated from closed circular sinusoidal fringe patterns in polar coordinates, and pixel-to-pixel correspondences are acquired in Cartesian coordinates. CA is compensated in the vertical and horizontal directions with sub-pixel accuracy. Furthermore, the systematic error introduced by the red, green, and blue color filters of the LCD is analyzed and eliminated. Finally, experimental results demonstrated the effectiveness of the proposed method. Compared to existing CA correction methods based on discrete identification points, the proposed method can ascertain the full-field pixel deviations caused by CA; moreover, its PSNR is larger, so it gives better results.

Because CA varies with the distance from the tested object to the camera, the CA of the three channels will be calibrated at several depths and the CA at intermediate depths obtained through interpolation. Therefore, the relation between CA and the distance from the tested object to the camera should be determined in future work; it can then be used to correct the effect of CA when objects of different shapes are measured.

The proposed calibration method can accurately and effectively determine the axial and radial CA for each pixel in a captured image. Using the calibrated results, one can completely eliminate the CA displayed in color images captured by color cameras. Therefore, compared to existing methods, the proposed method has the following two advantages: (1) high resolution, since the full-field images are used to calculate every pixel's deviation between color channels; and (2) high accuracy, since the obtained CA is produced from a continuous phase map.
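The depth interpolation proposed above for future work could, under a linearity assumption, be sketched as per-pixel interpolation between deviation maps calibrated at two known depths. This is an illustrative sketch only, not the authors' implementation; the depths and deviation values are made up:

```python
import numpy as np

def interpolate_ca_map(map_near: np.ndarray, map_far: np.ndarray,
                       z_near: float, z_far: float, z: float) -> np.ndarray:
    """Linearly interpolate per-pixel CA deviation maps calibrated at depths
    z_near and z_far to estimate the map at an intermediate depth z."""
    t = (z - z_near) / (z_far - z_near)
    return (1.0 - t) * map_near + t * map_far

# Hypothetical constant deviation maps at two calibration depths (mm).
near = np.full((4, 4), 0.30)   # pixel deviation calibrated at z = 500 mm
far = np.full((4, 4), 0.50)    # pixel deviation calibrated at z = 700 mm
mid = interpolate_ca_map(near, far, 500.0, 700.0, 600.0)
print(round(float(mid[0, 0]), 3))  # 0.4, halfway between the two maps
```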

Acknowledgments: The authors would like to thank the National Natural Science Foundation of China (under grants 51675160 and 61171048), the Key Basic Research Project of Applied Basic Research Programs Supported by Hebei Province (under grant 15961701D), the Research Project for High-level Talents in Hebei University (under grant GCC2014049), the Talents Project Training Funds in Hebei Province (No. A201500503), and the Tianjin Science and Technology Project (under grant 15PTSYJC00260). This project is also funded by European Horizon 2020 through the Marie Sklodowska-Curie Individual Fellowship Scheme (under grant 7067466-3DRM).

Author Contributions: Xiaohong Liu and Shujun Huang performed the simulations and experiments and analyzed the data under the guidance of Zonghua Zhang, Feng Gao, and Xiangqian Jiang.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Apochromatic (APO) lens in binoculars. Available online: http://www.bestbinocularsreviews.com/blog/apochromatic-apo-lenses-in-binoculars-03/ (accessed on 12 February 2017).
2. Dollond, J. Account of some experiments concerning the different refrangibility of light. Philos. Trans. R. Soc. 1759, 50, 733–743. [CrossRef]
3. Shen, M. Camera collection and appreciation. Camera 1995, 6, 40–42.
4. ED lens. Available online: http://baike.sogou.com/v54981934.html (accessed on 19 February 2017).
5. Harvard School of Engineering and Applied Sciences. Perfect colors, captured with one ultra-thin lens. Available online: http://www.seas.harvard.edu/news/2015/02/perfect-colors-captured-with-one-ultra-thin-lens (accessed on 19 March 2017).
6. Zhang, R.; Zhou, M.; Jian, X. Compensation method of image chromatic aberration. Patent CN1612028A, 4 May 2005.
7. Sterk, P.; Mu, L.; Driem, A. Method and device for dealing with chromatic aberration and purple stripes. Patent CN103209330 A, 17 July 2013.
8. Zhang, Z.; Towers, C.; Towers, D. Compensating lateral chromatic aberration of a colour fringe projection system for shape metrology. Opt. Lasers Eng. 2010, 48, 159–165. [CrossRef]
9. Willson, R.; Shafer, S. Active lens control for high precision computer imaging. IEEE Trans. Rob. Autom. 1991, 3, 2063–2070.

10. Boult, T.; George, W. Correcting chromatic aberrations using image warping. In Proceedings of the 1992 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Champaign, IL, USA, 15–18 June 1992; pp. 684–687.
11. Kaufmann, V.; Ladstädter, R. Elimination of color fringes in digital photographs caused by lateral chromatic aberration. In Proceedings of the CIPA 2005 XX International Symposium, Torino, Italy, 26 September–1 October 2005.
12. Mallon, J.; Whelan, P. Calibration and removal of lateral chromatic aberration in images. Pattern Recogn. Lett. 2007, 28, 125–135. [CrossRef]
13. Chung, S.; Kim, B.; Song, W. Removing chromatic aberration by digital image processing. Opt. Eng. 2010, 49, 067002–067010. [CrossRef]
14. Chang, J.; Kang, H.; Kang, G. Correction of axial and lateral chromatic aberration with false color filtering. IEEE Trans. Image Process. 2013, 22, 1186–1198. [CrossRef][PubMed]
15. Huang, J.; Xue, Q.; Wang, Z.; Gao, J. Analysis and compensation for lateral chromatic aberration in color coding structured light 3D measurement system. Sensors 2016, 16, 1426. [CrossRef][PubMed]
16. Malacara, D. Phase shifting interferometry. In Optical Shop Testing, 3rd ed.; Wiley-Interscience: New York, NY, USA, 2007; pp. 550–557.
17. Huang, L.; Kemao, Q.; Pan, B.; Asundi, A. Comparison of Fourier transform, windowed Fourier transform, and wavelet transform methods for phase extraction from a single fringe pattern in fringe projection profilometry. Opt. Lasers Eng. 2010, 48, 141–148. [CrossRef]
18. Creath, K. Phase Measurement Interferometry Techniques; Elsevier Science Publishers: Amsterdam, The Netherlands, 1988; pp. 339–393.
19. Xu, J.; Liu, X.; Wan, A. An absolute phase technique for 3D profile measurement using four-step structured light pattern. Opt. Lasers Eng. 2012, 50, 1274–1280. [CrossRef]
20. Zuo, C.; Chen, Q.; Gu, G. High-speed three-dimensional profilometry for multiple objects with complex shapes. Opt. Soc. Am. 2012, 20, 19493–19510. [CrossRef][PubMed]
21. Tao, T.; Chen, Q.; Da, J. Real-time 3-D shape measurement with composite phase-shifting fringes and multi-view system. Opt. Express. 2016, 18, 20253–20269. [CrossRef][PubMed]
22. Zuo, C.; Chen, Q.; Gu, G. High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tri-polar pulse width-modulation fringe projection. Opt. Lasers Eng. 2013, 51, 953–960. [CrossRef]
23. Towers, C.; Towers, D.; Jones, J. Optimum frequency selection in multi-frequency interferometry. Opt. Lett. 2003, 28, 887–889. [CrossRef][PubMed]
24. Zhang, Z.; Towers, C.; Towers, D. Time efficient color fringe projection system for simultaneous 3D shape and color using optimum 3-frequency selection. Opt. Express. 2006, 14, 6444–6455. [CrossRef][PubMed]
25. Zuo, C.; Huang, L.; Chen, Q. Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review. Opt. Lasers Eng. 2016, 85, 84–103. [CrossRef]
26. Peng, J.; Liu, X.; Deng, D.; Guo, H.; Cai, Z.; Peng, X. Suppression of projector distortion in phase-measuring profilometry by projecting adaptive fringe patterns. Opt. Express. 2016, 24, 21846–21860. [CrossRef][PubMed]
27. Huang, S.; Liu, Y.; Zhang, Z. Pixel-to-pixel correspondence alignment method of a 2CCD camera by using absolute phase map. Opt. Eng. 2015, 54, 064101. [CrossRef]
28. Working principle of TFT-LCD. Available online: http://www.newmaker.com/disp_art/124/12061.html (accessed on 19 February 2017).
29. Camera Calibration Toolbox for Matlab. Available online: http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html (accessed on 17 April 2017).

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).