
Article

Model-Free Lens Distortion Correction Based on Phase Analysis of Fringe-Patterns

Jiawen Weng 1,†, Weishuai Zhou 2,†, Simin Ma 1, Pan Qi 3 and Jingang Zhong 2,4,*

1 Department of Applied Physics, South China Agricultural University, Guangzhou 510642, China; [email protected] (J.W.); [email protected] (S.M.)
2 Department of Optoelectronic Engineering, Jinan University, Guangzhou 510632, China; [email protected]
3 Department of Electronics Engineering, Guangdong Communication Polytechnic, Guangzhou 510650, China; [email protected]
4 Guangdong Provincial Key Laboratory of Optical Fiber Sensing and Communications, Guangzhou 510650, China
* Correspondence: [email protected]
† These authors contributed equally to this work.

Abstract: The existing lens correction methods deal with distortion correction through one or more specific image distortion models. However, distortion determination may fail when an unsuitable model is used, so methods based on a distortion model have some drawbacks. A model-free lens distortion correction based on the phase analysis of fringe-patterns is proposed in this paper. Firstly, the mathematical relationship between the distortion displacement and the modulated phase of the sinusoidal fringe-pattern is established in theory. By the phase demodulation analysis of the fringe-pattern, the distortion displacement map can be determined point by point for the whole distorted image, so the image correction is achieved according to the distortion displacement map by a model-free approach. Furthermore, the distortion center, which is important in obtaining an optimal result, is measured automatically from the instantaneous frequency distribution according to the character of the distortion. Numerical simulation and experiments performed with a wide-angle lens are carried out to validate the method.

Keywords: distortion measurement; optical distortion; camera calibration; fringe analysis

Citation: Weng, J.; Zhou, W.; Ma, S.; Qi, P.; Zhong, J. Model-Free Lens Distortion Correction Based on Phase Analysis of Fringe-Patterns. Sensors 2021, 21, 209. https://doi.org/10.3390/s21010209

1. Introduction

Received: 29 November 2020; Accepted: 27 December 2020; Published: 31 December 2020

Camera lenses suffer from optical distortion; thus, nonlinear distortion is introduced into the captured image, especially for a lens with a large field of view (FOV). Therefore, distortion correction is a significant problem in the analysis of digital images. Accurate lens distortion correction is especially crucial for any computer vision task [1–4] that involves quantitative measurements, such as geometric position determination, dimensional measurement, and image recognition. Existing methods for distortion correction can be divided into two main categories: traditional vision measurement methods and learning-based methods.

Traditional vision measurement methods fall into the following main types. The first relies on a known measuring pattern [5–7], including straight lines, vanishing points, and a planar pattern. It estimates the parameters of the un-distortion function from the known pattern to achieve correction. It is simple and effective, but the distortion center in nonlinear optimization would lead to instabilities [8]. The second is the multiple view correction method [9–11], which utilizes the correspondences between points in different images to measure lens distortion parameters. It achieves auto-correction without any special pattern but requires a set of images captured from different views. The third is the plumb-line method [12–15], which estimates the distortion parameters from distorted circular arcs. Accurate circular arc detection is a very important aspect for the


robustness and flexibility of this kind of method. Human supervision and some robust algorithms for circular arc detection have been developed. All the above-mentioned methods rely on some specific distortion models, such as the commonly used even-order polynomial model [16] proposed by Brown, the division model [17] proposed by Fitzgibbon, and the fisheye lens model [18]. The whole image achieves distortion correction by employing several characteristic points or lines in the analysis to find the distortion parameters. Such methods have poor generalization abilities to other distortion models. Furthermore, it should be noted that all these distortion models assume ideal circular symmetry. Learning-based methods can be divided into two kinds. The first is the parameter-based method [19–21], which estimates the distortion parameters by using convolutional neural networks (CNNs) in terms of the single parameter division model or fisheye lens model. It can provide more accurate distortion parameter estimation. However, the networks are still trained on a synthesized distorted image dataset derived from some specific distortion models, which causes inferior results for other types of distortion models. Recently, some distortion correction methods demanding no specific distortion model by deep learning have been proposed. Liao et al. [22] introduced model-free distortion rectification by introducing a distortion distribution map. Li et al. [23] proposed blind geometric distortion correction by using the displacement field between distorted images and corrected images. For these methods, different types of distortion models are involved in the synthesized distorted image dataset for training, and the distortion distribution map or displacement field is obtained according to these distortion models. It means that the distortion correction is still built on some distortion models, and the employed distortion models are circularly symmetric.
However, none of the existing mathematical distortion models can work well for all real lenses with fabrication artifacts. In addition, there are some model-free distortion correction methods. Munji [24] and Tecklenburg et al. [25] proposed a correction model based on finite elements. The remaining errors in the sensor space can be corrected by interpolation with the finite elements. However, the interpolation effect is reduced when there are not enough measured image points. Grompone von Gioi et al. [26] designed a model-free distortion correction method. It involves great computation and is time consuming, because an optimization algorithm and loop validation are used for high precision. Fringe-pattern phase analysis [27–29], due to its advantage of highly automated full-field analysis, is widely used in various optical measurement technologies, such as interferometry, digital holography, moiré fringe measurement, and so on. It is also used for lens distortion determination. Bräuer-Burchardt et al. [30] achieved lens distortion correction by phasogrammetry. The experimental system and image processing are somewhat complicated because both the projector lens distortion and the camera lens distortion are involved. Li et al. [31] eliminated the projector lens distortion by employing the Levenberg–Marquardt algorithm to improve the measurement accuracy of fringe projection profilometry, where the lens distortion is described by a polynomial distortion model. We previously employed the phase analysis of a one-dimensional measuring fringe-pattern and polynomial fitting for simple lens distortion elimination by assuming that the distortion is ideally circularly symmetric [32]. In this paper, a model-free lens distortion correction based on the phase analysis of a fringe-pattern is proposed. Unlike the method in [32], the proposed method does not rely on the circular symmetry assumption.
In order to avoid using a distortion model and the circular symmetry assumption, the proposed method uses two sets of directional orthogonal fringe patterns to obtain the distortion displacement of all points in the distorted image. Moreover, considering that the distortion center may be displaced from the image center by many factors, such as an offset of the lens center from the sensor center of the camera, a slight tilt of the sensor plane with respect to the lens, a misalignment of the individual components of a compound lens, and so on, the distortion center should be measured in accordance with specific conditions instead of being assumed to be the image center directly. For the proposed method, the distortion center is measured automatically from the instantaneous frequency distribution according to the character of the distortion.

The theoretical description of distortion measurement based on the phase analysis of the sinusoidal fringe-pattern is introduced. The numerical simulation results and experimental results show the validity and advantages of the proposed method.

2. Principle of Model-Free Lens Distortion Correction Method

2.1. Lens Distortion

A simple grid chart of the negative (barrel) distortion, as shown in Figure 1, is employed to present the theoretical description of lens distortion simply and clearly. The blue lines and the red lines correspond to the lines before and after distortion, respectively. It is easy to find that the undistorted point P comes to the distorted point Q after distortion.
According to the geometry shown in Figure 1, PQ is the distortion displacement ∆r associated with the distorted point Q, with ∆r > 0 for the barrel distortion and ∆r < 0 for the pincushion distortion:

|∆r| = √(∆x² + ∆y²)    (1)

with ∆x = xP − xQ and ∆y = yP − yQ. For image distortion correction, the most important thing is to decide the distortion displacement ∆r, i.e., ∆x and ∆y, at each point. Symbol denotation is given in the nomenclature, as shown in Table 1.

Figure 1. Distortion schematic diagram for the negative (barrel) distortion, where the blue and red lines correspond to the fringe-pattern stripes before and after distortion, respectively.

Table 1. Nomenclature.

Term — Description
Ix^u; Iy^u — undistorted longitudinal and transverse fringe-patterns
Ix^d; Iy^d — distorted longitudinal and transverse fringe-patterns
Ix,n^d; Iy,n^d — distorted longitudinal and transverse fringe-patterns with phase shift
ϕx; ϕy — phase of distorted longitudinal and transverse fringe-patterns
φx, ∆ϕx; φy, ∆ϕy — modulated phase of distorted longitudinal and transverse fringe-patterns
φxo; φyo — initial phase of distorted longitudinal and transverse fringe-patterns
fx; fy — instantaneous frequency of distorted longitudinal and transverse fringe-patterns
fxo; fyo — fundamental frequency of distorted longitudinal and transverse fringe-patterns
∆x; ∆y — distortion displacement of points on the distorted fringe-pattern
∆′x; ∆′y — distortion displacement of points on the corrected fringe-pattern
(xd^{m,n}, yd^{m,n}) — point on the distorted image corresponding to the point on the corrected image, with (m, n) being integer



2.2. Phase Analysis of Fringe-Pattern Measuring Template

Two sets of sinusoidal fringe-patterns parallel to the y-axis and x-axis of the display coordinate system, i.e., the longitudinal and transverse fringe-patterns, are employed as measuring templates for phase demodulation analysis to obtain the distortion displacements ∆x and ∆y, respectively. Figure 2 shows the sinusoidal fringe-patterns before and after barrel distortion. The undistorted longitudinal and transverse fringe-patterns are expressed as:

Ix^u(x, y) = A + B cos[2π fxo x + φxo]
Iy^u(x, y) = A + B cos[2π fyo y + φyo]    (2)

The corresponding distorted fringe-patterns are:

Ix^d(x, y) = A + B cos[2π fxo x + φx(x, y) + φxo]
Iy^d(x, y) = A + B cos[2π fyo y + φy(x, y) + φyo]    (3)

where A is the background intensity; B/A is the visibility of the fringe-pattern; fxo and fyo are the fundamental spatial frequencies of the longitudinal and transverse sinusoidal fringe-patterns, respectively; φx(x, y) and φy(x, y) are the modulated phases caused by the lens distortion; φxo and φyo are the initial phases. By the analysis of the fringe-patterns, the modulated phase can be obtained point by point, so the distortion displacements ∆x and ∆y at each point are decided.
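As an illustration, the measuring templates of Equation (2) can be generated with a few lines of NumPy. This is a minimal sketch: the function name and parameter defaults are ours, chosen to match the 16-pixel period used later in the simulation of Section 3.

```python
import numpy as np

def make_fringe_patterns(size=512, f0=1.0 / 16, A=0.5, B=0.5, phi0=0.0):
    """Undistorted longitudinal and transverse fringe-patterns, Eq. (2).

    f0 is the fundamental spatial frequency in cycles/pixel; a 16-pixel
    period corresponds to f0 = 1/16.  A is the background intensity and
    B/A the fringe visibility.
    """
    y, x = np.mgrid[0:size, 0:size].astype(float)
    Ix_u = A + B * np.cos(2 * np.pi * f0 * x + phi0)  # stripes parallel to y-axis
    Iy_u = A + B * np.cos(2 * np.pi * f0 * y + phi0)  # stripes parallel to x-axis
    return Ix_u, Iy_u
```

The same function, evaluated with a phase offset added inside the cosine, produces the phase-shifted templates needed in Section 2.2.1.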


Figure 2. Sinusoidal fringe-patterns before and after barrel distortion. (a) Undistorted longitudinal fringe-pattern; (b) Undistorted transverse fringe-pattern; (c) Distorted longitudinal fringe-pattern; (d) Distorted transverse fringe-pattern.

2.2.1. Phase of Distorted Fringe-Patterns

The phase-shifting method [33], providing high-precision point-to-point phase retrieval from fringe-patterns due to its best spatial localization merit, is employed for phase demodulation. The intensity distributions of the longitudinal and transverse sinusoidal fringe-patterns after distortion are:

Ix,n^d(x, y) = A + B cos[ϕx(x, y) + 2π(n − 1)/N]
Iy,n^d(x, y) = A + B cos[ϕy(x, y) + 2π(n − 1)/N]    (4)

where ϕx(x, y) = 2π fxo x + φx(x, y) + φxo and ϕy(x, y) = 2π fyo y + φy(x, y) + φyo are the phases of the longitudinal and transverse fringe-patterns, respectively; n = 1, 2, ··· , N and N = 4. By employing the four-step phase-shifting method, the wrapped phase distribution can be acquired from the distorted fringe-patterns as:

ϕx(x, y) = arctan{[Ix,4^d(x, y) − Ix,2^d(x, y)] / [Ix,1^d(x, y) − Ix,3^d(x, y)]}
ϕy(x, y) = arctan{[Iy,4^d(x, y) − Iy,2^d(x, y)] / [Iy,1^d(x, y) − Iy,3^d(x, y)]}    (5)

The calculated phase is wrapped into [−π, π] by the arctangent calculation, so an unwrapping algorithm [34] is performed to obtain the continuous phase distribution of the distorted fringe-pattern. A larger number of phase shifts would reduce the distortion of the cosine signal and improve the precision of the phase demodulation.
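The four-step retrieval of Equations (4) and (5) can be sketched in a few lines of NumPy (the function name is ours; np.arctan2 is used so that the signs of the numerator and denominator select the correct quadrant):

```python
import numpy as np

def wrapped_phase(frames):
    """Four-step phase-shifting phase retrieval, Eqs. (4)-(5).

    frames: the four fringe images I_1..I_4 recorded with phase shifts
    0, pi/2, pi, 3pi/2.  The result is the phase wrapped into (-pi, pi].
    """
    I1, I2, I3, I4 = frames
    # I4 - I2 = 2B sin(phi), I1 - I3 = 2B cos(phi), so arctan2 recovers phi
    return np.arctan2(I4 - I2, I1 - I3)
```

On noise-free data, applying np.unwrap along each row and then each column removes the 2π jumps; real fringe images would need a robust two-dimensional unwrapper such as the algorithm of [34].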

2.2.2. Modulated Phase Calculation by Distortion Center Detection

The modulated phases φx(x, y) and φy(x, y), which contain the distortion information, are calculated by subtracting the phases (2π fxo x + φxo) and (2π fyo y + φyo) from ϕx(x, y) and ϕy(x, y), respectively. So, in order to obtain the modulated phase, the fundamental spatial frequencies fxo and fyo should be detected. According to the distortion character, there is no distortion at the distortion center. Therefore, the fundamental spatial frequency of the fringe-pattern is detected at the position of the distortion center. It should be noticed that the distortion center may be significantly displaced from the center of the image by many factors. For the proposed method, the distortion center can be measured automatically from the instantaneous frequency distribution. It is at the position where the minimum instantaneous frequency appears for the barrel distortion and the maximum instantaneous frequency appears for the pincushion distortion. So, we perform the partial derivative operation along the x and y directions of the phases ϕx(x, y) and ϕy(x, y) to get the instantaneous frequencies fx and fy, respectively, as:

fx(x, y) = (1/2π) ∂ϕx(x, y)/∂x
fy(x, y) = (1/2π) ∂ϕy(x, y)/∂y    (6)

By judging the variation trend of instantaneous frequency, the type of distortion could be determined. When the instantaneous frequency increases along the radial direction from the distortion center, it is the barrel distortion. Otherwise, it is the pincushion distortion. Then, by detecting the position of the minimum fx and fy along the x and y direction for the barrel distortion, or the position of the maximum fx and fy for the pincushion distortion, the distortion center can be decided as (x0, y0). So, the corresponding fundamental frequencies are determined at (x0, y0) as:

fxo = (1/2π) ∂ϕx(x, y)/∂x |x=x0, y=y0
fyo = (1/2π) ∂ϕy(x, y)/∂y |x=x0, y=y0    (7)
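The center detection of Equations (6) and (7) can be sketched as follows, assuming barrel distortion and unwrapped input phases (the function name is ours; np.gradient approximates the partial derivatives by finite differences):

```python
import numpy as np

def distortion_center(phase_x, phase_y):
    """Distortion-center detection from instantaneous frequency, Eqs. (6)-(7).

    phase_x, phase_y: unwrapped phases of the distorted longitudinal and
    transverse fringe-patterns.  For barrel distortion the instantaneous
    frequency is minimal at the center (for pincushion, use max/argmax).
    """
    fx = np.gradient(phase_x, axis=1) / (2 * np.pi)  # Eq. (6), x-derivative
    fy = np.gradient(phase_y, axis=0) / (2 * np.pi)  # Eq. (6), y-derivative
    x0 = int(np.argmin(fx.min(axis=0)))  # column where fx is smallest
    y0 = int(np.argmin(fy.min(axis=1)))  # row where fy is smallest
    fxo, fyo = fx[y0, x0], fy[y0, x0]    # Eq. (7): fundamental frequencies
    return (x0, y0), (fxo, fyo)
```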

According to fxo and fyo, the phase distribution 2π fxox and 2π fyoy can be calculated, and the modulated phase can be rewritten as ∆ϕx(x, y) and ∆ϕy(x, y):

∆ϕx(x, y) = ϕx(x, y) − [2π fxo (x − x0) + ϕx(x0, y0)]
∆ϕy(x, y) = ϕy(x, y) − [2π fyo (y − y0) + ϕy(x0, y0)]    (8)

where ϕx(x0, y0) and ϕy(x0, y0) are the phase of the longitudinal and transverse sinusoidal fringe-patterns at the distortion center.

2.2.3. Relationship of Modulated Phase and Distortion Displacement

The distortion displacements ∆x(x, y) and ∆y(x, y) are obtained from the modulated phase as:

∆x(x, y) = ∆ϕx(x, y) / (2π fxo)
∆y(x, y) = ∆ϕy(x, y) / (2π fyo)    (9)

Therefore, the measurement of the distortion displacement is transferred into the calculation of the modulated phase by the fringe-pattern analysis. In a word, the distortion

displacement ∆x(x, y) and ∆y(x, y) can be measured point by point according to the phase demodulation analysis of the distorted sinusoidal fringe-patterns.
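Equations (8) and (9) translate directly into a short sketch (the function name is ours, continuing the conventions of the previous snippets):

```python
import numpy as np

def displacement_maps(phase_x, phase_y, center, fxo, fyo):
    """Modulated phase and distortion displacement, Eqs. (8)-(9).

    phase_x, phase_y: unwrapped phases of the distorted patterns;
    center = (x0, y0); fxo, fyo: fundamental frequencies from Eq. (7).
    """
    x0, y0 = center
    h, w = phase_x.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    # Eq. (8): subtract the carrier phase anchored at the distortion center
    dphi_x = phase_x - (2 * np.pi * fxo * (xx - x0) + phase_x[y0, x0])
    dphi_y = phase_y - (2 * np.pi * fyo * (yy - y0) + phase_y[y0, x0])
    # Eq. (9): phase-to-displacement conversion
    return dphi_x / (2 * np.pi * fxo), dphi_y / (2 * np.pi * fyo)
```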

2.2.4. Distortion Correction

The inverse mapping method with bilinear interpolation is employed for the image distortion correction. It should be noticed that the distortion displacement obtained by Equation (9) corresponds to the points on the distorted fringe-pattern. So, we recalculate the distortion displacements ∆′x and ∆′y, which correspond to the points on the corrected fringe-pattern. Firstly, we establish the discrete numerical correspondences of the new distortion displacements ∆′x and ∆′y with the points (x + ∆x, y + ∆y) on the corrected fringe-pattern. It should be noticed that the calculated coordinate values of the points (x + ∆x, y + ∆y) on the corrected fringe-pattern may not be integers. So, we calculate the distortion displacements ∆′x(m, n) and ∆′y(m, n) by performing the bicubic interpolation algorithm, where (m, n) is the integer coordinate value of the corrected point. Therefore, the distribution of the distortion displacement corresponding to the points on the corrected fringe-pattern is decided, and the image distortion correction can be achieved by adopting the inverse mapping method directly by:

xd^{m,n} = m − ∆′x(m, n)
yd^{m,n} = n − ∆′y(m, n)    (10)

where (xd^{m,n}, yd^{m,n}) is the corresponding distorted point. Finally, the bilinear interpolation algorithm is employed for the image interpolation, not only because of its simple and convenient calculation process but also due to its ability to overcome the problem of gray-scale discontinuity.
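A compact sketch of the inverse mapping of Equation (10) with bilinear interpolation (function names are ours; the arrays dxp and dyp stand for the resampled displacements ∆′x and ∆′y on the integer corrected grid):

```python
import numpy as np

def bilinear_sample(img, xs, ys):
    """Sample img at float coordinates (column xs, row ys), bilinearly."""
    h, w = img.shape
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    tx, ty = xs - x0, ys - y0
    return ((1 - ty) * ((1 - tx) * img[y0, x0] + tx * img[y0, x0 + 1])
            + ty * ((1 - tx) * img[y0 + 1, x0] + tx * img[y0 + 1, x0 + 1]))

def correct_image(distorted, dxp, dyp):
    """Inverse mapping, Eq. (10): every corrected pixel (m, n) is fetched
    from the distorted image at (m - dx'(m, n), n - dy'(m, n))."""
    h, w = distorted.shape
    nn, mm = np.mgrid[0:h, 0:w].astype(float)  # nn: row (y = n), mm: column (x = m)
    return bilinear_sample(distorted, mm - dxp, nn - dyp)
```

Reading from the distorted image at non-integer positions, rather than scattering distorted pixels forward, is what guarantees that every corrected pixel receives a value.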

3. Numerical Simulation

Numerical simulation is performed to verify the validity of the introduced method. Firstly, two sets of longitudinal and transverse sinusoidal fringe-patterns of 512 × 512 pixels with phase shifts of 0, π/2, π, 3π/2 are employed as measuring patterns. The spatial period of the undistorted fringe-pattern is 16 pixels. Then, the single parameter division model [17] with distortion parameter λ = −1 × 10⁻⁶, given by Equation (11), is employed to generate the barrel distorted fringe-patterns:

ru = rd / (1 + λ rd²)    (11)

where ru and rd are the Euclidean distances of the undistorted and distorted points to the distortion center, respectively. Moreover, the distortion center is shifted away from the image center (256, 256) to (273, 289). Figure 3 shows the corresponding distorted fringe-patterns. It should be noticed that the proposed method is model-free; the single parameter division model employed here, which can be replaced by any other distortion model, is just used to generate a simulated distorted image. The analysis process and the corresponding results can be described as follows.

Step 1: Employ the four-step phase-shifting method for the phase demodulation and perform the unwrapping algorithm to obtain the phase distribution of the distorted longitudinal and transverse fringe-patterns. Figure 4a,b are the corresponding wrapped phases.

Step 2: Calculate the instantaneous frequencies fx and fy. It is determined as barrel distortion because the instantaneous frequency increases along the radial direction from the distortion center. By detecting the minimum fx along the x direction, we can find that the distortion center is at the 273th column. Similarly, by detecting the minimum fy along the y direction, we can find that the distortion center is at the 289th row. So, the distortion center is at (273, 289). Figure 5a,b show fx and fy, where the red dotted lines are at the 289th row and 273th column, respectively. Figure 5c,d are the corresponding distributions of fx at the 289th row and fy at the 273th column, respectively.


The fundamental frequency is fxo = fyo = 0.0625, and the phases ϕx(x0, y0) and ϕy(x0, y0) at the distortion center are obtained.
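The simulated warp of Equation (11) can be sketched as follows (the function name is ours; a nearest-neighbour lookup is used for brevity, whereas a proper rendering would resample with interpolation):

```python
import numpy as np

def division_model_distort(img, lam, center):
    """Simulate division-model distortion, Eq. (11): r_u = r_d / (1 + lam*r_d^2).

    Each pixel of the distorted image at radius r_d from the center takes the
    value of the undistorted pattern at the corresponding radius r_u;
    a negative lam produces barrel distortion.
    """
    h, w = img.shape
    x0, y0 = center
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    rd2 = (xx - x0) ** 2 + (yy - y0) ** 2
    scale = 1.0 / (1.0 + lam * rd2)            # ratio r_u / r_d
    xu = np.clip(np.rint(x0 + (xx - x0) * scale), 0, w - 1).astype(int)
    yu = np.clip(np.rint(y0 + (yy - y0) * scale), 0, h - 1).astype(int)
    return img[yu, xu]
```

Applying this warp to the four phase-shifted templates of each orientation yields test inputs analogous to those of Figure 3 (here with an exaggerated λ so the effect is visible on a small grid).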

Figure 3. Simulated distorted fringe-patterns of size 512 × 512 pixels by a single parameter division model with a distortion parameter λ = −1 × 10⁻⁶ and distortion center (273, 289). The longitudinal fringe-patterns with a phase shift of 0, π/2, π, 3π/2 are at the first row and the transverse fringe-patterns with a phase shift of 0, π/2, π, 3π/2 are at the second row.




Sensors 2021, 21, x FOR PEER REVIEW 8 of 17


Figure 4. Wrapped phase. (a) Wrapped phase of the distorted longitudinal fringe-pattern; (b) Wrapped phase of the distorted transverse fringe-pattern.







Figure 5. Instantaneous frequency. (a) fx; (b) fy; (c) fx at the 289th row with the minimum point at the 273th column; (d) fy at the 273th column with the minimum point at the 289th row.

Step 3: Modulated phase calculation. According to Equation (8), the modulated phases ∆ϕx(x, y) and ∆ϕy(x, y) are obtained as shown in Figure 6a,b. So, the distortion displacements ∆x(x, y) and ∆y(x, y) are obtained point by point according to Equation (9), as shown in Figure 6c,d. The distortion displacement ∆r can be obtained by Equation (1), and the maximum error of ∆r is 0.24 pixels. In order to further validate the proposed method, distortion parameter estimation according to the single parameter division model is performed. The estimated distortion parameter is λ = −0.992 × 10⁻⁶, with a relative error of 0.8%.

Figure 6. Modulated phase and distortion displacement. (a) ∆ϕx(x, y); (b) ∆ϕy(x, y); (c) ∆x(x, y); (d) ∆y(x, y).


Sensors 2021, 21, x FOR PEER REVIEW 8 of 17

at the 289th row and 273th column, respectively. Figure 5c,d are the corresponding distri-

bution of f x at the 289th row and f y at the 273th column, respectively. The fundamen- == ϕ ϕ tal frequency is ffxo yo 0.0625 and the phase x (,xy00 ) and y (,xy00 ) at the distor- tion center are obtained.

(a) (b)

(c) (d)

Figure 5. Instantaneous frequency. (a) fx; (b) fy; (c) fx at the 289th row with the minimum point Sensors 2021, 21, 209 8 of 16 at the 173th column; (d) fy at the 273th column with the minimum point at the 289th row.

Step3: Modulated phase calculation. According to Equation (8), the modulated phases ∆ϕx(x, y) and ∆ϕy(x, y) are obtained as shown in Figure 6a,b. So, the distortion displacements ∆x(x, y) and ∆y(x, y) are obtained point by point according to Equation (9), as shown in Figure 6c,d. The distortion displacement ∆r can be obtained by Equation (1), and the maximum error of ∆r is 0.24 pixels. To further validate the proposed method, distortion parameter estimation according to the single-parameter division model is performed. The estimated distortion parameter is λ = −0.9920 × 10⁻⁶, with a relative error of 0.8%.
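Assuming Equation (9) takes the standard fringe form — a modulated phase of 2π corresponds to a shift of one fringe period, i.e., 1/f0 pixels — Step3 reduces to a pointwise scaling of the phase maps. A minimal sketch with the simulation's fundamental frequency:

```python
import numpy as np

# Fundamental frequency from the simulation (cycles per pixel).
F0 = 0.0625

def phase_to_displacement(d_phi, f0=F0):
    """Convert a modulated-phase map (radians) into a displacement map
    (pixels), assuming the standard relation dx = d_phi / (2*pi*f0)."""
    return d_phi / (2 * np.pi * f0)

# A full 2*pi phase modulation corresponds to one fringe period = 16 pixels.
print(phase_to_displacement(np.array([2 * np.pi])))  # [16.]
```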

Figure 6. Modulated phase and distortion displacement. (a) ∆ϕx(x, y); (b) ∆ϕy(x, y); (c) ∆x(x, y); (d) ∆y(x, y).

Step4: Distortion displacement map calculation. We establish the discrete numerical correspondences of the new distortion displacements ∆′x and ∆′y with the points (x + ∆x, y + ∆y) on the corrected fringe-pattern. Table 2 shows some of the distortion displacements of the points on the distorted fringe-pattern and the corresponding points on the corrected fringe-pattern. We find that the calculated coordinates of the points on the corrected fringe-pattern (x + ∆x, y + ∆y) are not integers. So, we calculate the distortion displacements ∆′x(m, n) and ∆′y(m, n) by performing the bicubic interpolation algorithm, where (m, n) is the integer coordinate value of the corrected point.

Table 2. Some calculated distortion displacement (pixel).

Distortion displacement of points on the distorted fringe-pattern:
  ∆x(x, y): 10.92 (197,121)   11.06 (198,121)   11.20 (199,121)
  ∆y(x, y): 6.71 (197,121)    6.76 (198,121)    6.81 (199,121)
Distortion displacement of points on the corrected fringe-pattern:
  ∆′x(x + ∆x, y + ∆y): 10.92 (207.92,127.71)   11.06 (209.06,127.76)   11.20 (210.20,127.81)
  ∆′y(x + ∆x, y + ∆y): 6.71 (207.92,127.71)    6.76 (209.06,127.76)    6.81 (210.20,127.81)
Distortion displacement of integer points on the corrected fringe-pattern:
  ∆′x(m, n): 10.94 (208,128)   11.06 (209,128)   11.19 (210,128)
  ∆′y(m, n): 6.73 (208,128)    6.78 (209,128)    6.82 (210,128)
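Step4's resampling — known displacements at scattered non-integer corrected coordinates, evaluated on the integer grid (m, n) — can be sketched as follows. The paper uses bicubic interpolation; this illustration substitutes SciPy's scattered-data 'cubic' interpolant (a piecewise-cubic Clough–Tocher scheme, not strictly bicubic), and all array names are hypothetical:

```python
import numpy as np
from scipy.interpolate import griddata

def resample_displacement(xc, yc, d, grid_w, grid_h):
    """xc, yc: scattered corrected coordinates (x + dx, y + dy);
    d: displacement values known at those points.
    Returns the displacement evaluated at every integer pixel (m, n)."""
    mm, nn = np.meshgrid(np.arange(grid_w), np.arange(grid_h))
    # 'cubic' = piecewise-cubic interpolation of scattered data; points
    # outside the convex hull of (xc, yc) come back as NaN.
    return griddata((xc, yc), d, (mm, nn), method='cubic')
```

A smooth displacement field sampled on a jittery or shifted grid is reproduced closely on the integer grid, which is all Step4 needs before the inverse mapping of Step5.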

Step5: According to the distortion displacement map, image distortion correction can be performed directly by employing inverse mapping with bilinear interpolation. A numerical simulation of the distorted checkerboard is performed. Figure 7a shows the distorted checkerboard image with the same distortion parameter λ = −1 × 10⁻⁶ and distortion center (273, 289). Figure 7b is the corrected result by the proposed method, where the red points represent the corners. Figure 8 is the corresponding corners image, where the red asterisks represent the corners of the distorted checkerboard image and the blue points represent the corners of the corrected checkerboard image. The distortion displacement at the left top point is ∆r = 27.30 pixels. The curvature radius of the red line formed by the left points on the distorted checkerboard image is 2.2148 × 10³ pixels, and that of the blue line corresponding to the corrected checkerboard image is 1.4731 × 10⁵ pixels. It means that the circular arc is corrected to be straight.
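Step5's inverse mapping might look like the following sketch: each integer pixel (m, n) of the corrected image is fetched from the distorted image at (m − ∆′x, n − ∆′y) by bilinear interpolation, assuming (as in the text) that the displacement maps give the offset from distorted to corrected coordinates.

```python
import numpy as np

def bilinear_sample(img, xs, ys):
    """Bilinearly sample img (H x W) at float coordinates (xs, ys)."""
    h, w = img.shape
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    fx = np.clip(xs - x0, 0.0, 1.0)
    fy = np.clip(ys - y0, 0.0, 1.0)
    return ((1 - fx) * (1 - fy) * img[y0, x0]
            + fx * (1 - fy) * img[y0, x0 + 1]
            + (1 - fx) * fy * img[y0 + 1, x0]
            + fx * fy * img[y0 + 1, x0 + 1])

def correct_image(distorted, dx_map, dy_map):
    """Inverse mapping: each corrected pixel (m, n) is fetched from the
    distorted image at (m - dx, n - dy)."""
    h, w = dx_map.shape
    mm, nn = np.meshgrid(np.arange(w), np.arange(h))
    return bilinear_sample(distorted, mm - dx_map, nn - dy_map)

# Sanity check: zero displacement leaves the image unchanged.
img = np.arange(12.0).reshape(3, 4)
assert np.allclose(correct_image(img, np.zeros((3, 4)), np.zeros((3, 4))), img)
```

Inverse mapping is preferred over forward mapping here because every output pixel gets exactly one value, with no holes to fill.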





Figure 7. Analyzed results of the checkerboard image, where the red points represent the corners. (a) Distorted checkerboard image with distortion parameter λ = −1 × 10⁻⁶ and distortion center (273, 289); (b) Corrected image.


Figure 8. Corners image with the red asterisks representing the corners of the distorted checkerboard image and the blue points representing the corners of the corrected checkerboard image. The curvature radius of the red line is 2.2148 × 10³ pixels, and that of the blue line is 1.4731 × 10⁵ pixels.

4. Experiment and Results

An experimental setup shown in Figure 9 is employed to perform distortion correction of a wide-angle lens. A flat-panel liquid crystal display (LCD) is used to display two sets of longitudinal and transverse sinusoidal fringe-patterns with phase shifts of 0, π/2, π, 3π/2. The images of these fringe-patterns are captured by a charge-coupled device (CCD) camera with a wide-angle lens. The LCD plane can be regarded as an ideal plane, and the optical axis of the camera is perpendicular to the LCD plane. The distorted fringe-patterns captured by the camera are shown in Figure 10. Figure 11a,b show the intensity distributions of the central row and column of the longitudinal and transverse fringe-patterns with a phase shift of π/2, respectively.

Figure 9. Experimental setup. Liquid crystal display (LCD): FunTV D49Y; wide-angle lens: Theia MY125M, FOV 137°; charge-coupled device (CCD) camera: PointGrey CM3-U3-50S5M-CS, 2048 × 2448 pixels, 3.45 μm pixel size.






Figure 10. Distorted fringe-patterns of size 2048 × 2448 pixels. The longitudinal fringe-patterns with phase shifts of 0, π/2, π, 3π/2 are in the first row and the transverse ones with phase shifts of 0, π/2, π, 3π/2 are in the second row.


Figure 11. Intensity distribution of the distorted fringe-patterns. (a) Intensity of the central row of the longitudinal fringe-pattern with a phase shift of π/2; (b) Intensity of the central column of the transverse fringe-pattern with a phase shift of π/2.

By performing the four-step phase-shifting analysis, the wrapped phases of the distorted longitudinal and transverse fringe-patterns are obtained as shown in Figure 12a,b. It shows that the phase of the distorted fringe-pattern can be demodulated well even when the intensity of the fringe-pattern is low within some areas. The corresponding unwrapped phase can be obtained by performing the unwrapping algorithm. First, we perform the partial derivative operation on the phases of the distorted longitudinal and transverse fringe-patterns to get the instantaneous frequencies fx and fy, respectively. The type of distortion is determined as barrel distortion because the instantaneous frequency increases along the radial direction. Then, the distortion center is determined at (1224, 1008) according to the distribution of the instantaneous frequencies fx and fy. Considering the fluctuation of the analyzed phase caused by noise in the experiment, we perform numerical linear fitting to calculate the phase of the undistorted fringe-pattern by employing the central 25 points of the phase of the distorted longitudinal and transverse fringe-patterns respectively, instead of performing the calculation with the instantaneous frequency and phase at the distortion center. The points {ϕx(x0 − 12, y0), ··· ϕx(x0, y0), ··· ϕx(x0 + 12, y0)} and {ϕy(x0, y0 − 12), ··· ϕy(x0, y0), ··· ϕy(x0, y0 + 12)} are employed for the calculation. The fundamental frequencies are fxo = fyo = 0.0133. The modulated phase distributions ∆ϕx(x, y) and ∆ϕy(x, y) are obtained as shown in Figure 13a,b, respectively. Figure 13c,d show the numerical distortion displacement maps ∆′x(m, n) and ∆′y(m, n) of size 2496 × 2984 pixels. Finally, by employing inverse mapping with bilinear interpolation, the image distortion correction can be achieved.
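The four-step phase-shifting analysis has a closed form: with intensity frames I_k = a + b·cos(ϕ + kπ/2), k = 0…3, one finds I3 − I1 = 2b·sin(ϕ) and I0 − I2 = 2b·cos(ϕ), so the wrapped phase is their arctangent. A minimal sketch on a synthetic fringe:

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Four-step phase-shifting with shifts 0, pi/2, pi, 3*pi/2:
    phi = atan2(I3 - I1, I0 - I2), wrapped to (-pi, pi]."""
    return np.arctan2(i3 - i1, i0 - i2)

# Recover phi = 2*pi*0.0625*x from four shifted synthetic frames; the
# background 128 and amplitude 100 cancel out of the arctangent.
x = np.arange(64)
phi = 2 * np.pi * 0.0625 * x
frames = [128 + 100 * np.cos(phi + k * np.pi / 2) for k in range(4)]
rec = wrapped_phase(*frames)
print(np.allclose(np.angle(np.exp(1j * (rec - phi))), 0, atol=1e-9))  # True
```

Because the background a and modulation b drop out of the ratio, this is why (as noted above) the phase can still be demodulated where the fringe intensity is low.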





Figure 12. Wrapped phase. (a) Wrapped phase of the distorted longitudinal fringe-pattern; (b) Wrapped phase of the distorted transverse fringe-pattern.


Figure 13. Modulated phase and distortion displacement map. (a) ∆ϕx(x, y) of size 2048 × 2448 pixels; (b) ∆ϕy(x, y) of size 2048 × 2448 pixels; (c) ∆′x(m, n) of size 2496 × 2984 pixels; (d) ∆′y(m, n) of size 2496 × 2984 pixels.

Figure 14 shows the experiment of checkerboard images, where Figure 14a is the distorted image and Figure 14b is the corrected image with the red points representing the corners by the proposed model-free method. Figure 15 is the corresponding corners image, where the red asterisks represent the corners of the distorted checkerboard image and the blue points represent the corners of the corrected checkerboard image. The distortion displacement at the left top point is ∆r = 198.94 pixels. The curvature radius of the red line formed by the left points on the distorted checkerboard image is 3.2457 × 10³ pixels, and that of the blue line corresponding to the corrected checkerboard image is 6.8656 × 10⁴ pixels. The larger the curvature radius, the closer the curve is to a straight line, i.e., the better the correction effect.

Figure 14. Experimental results of checkerboard images. (a) Distorted checkerboard image of size 2048 × 2448 pixels; (b) Corrected image of size 2496 × 2984 pixels with the red points representing the corners by the proposed model-free method. The pixel count of the square within the central green rectangular region is 49,033 pixels, and that of the square within the external blue rectangular region is 48,460 pixels. (c) Corrected image by the plumb-line method. The curvature radius of the red line is 3.2821 × 10⁴ pixels. The pixel count of the square within the central green rectangular region is 46,988 pixels, and that of the square within the external blue rectangular region is 57,490 pixels.

Figure 15. Corners image with the red asterisks representing the corners of the distorted checkerboard image and the blue points representing the corners of the corrected checkerboard image corresponding to those shown in Figure 14a,b. The curvature radius of the red line is 3.2457 × 10³ pixels, and that of the blue line is 6.8656 × 10⁴ pixels.








Firstly, we perform distortion correction by the plumb-line method with a single-parameter division model [12] for comparison. The red line shown in Figure 15 is employed for the estimation of the distortion parameter. The estimated λ is −2.1469 × 10⁻⁷. According to Equation (11), the distortion displacement can be numerically calculated. The corrected image is shown in Figure 14c, where the red points represent the corners. The curvature radius of the red line is 3.2821 × 10⁴ pixels, compared with the corresponding curvature radius of 6.8656 × 10⁴ pixels by the proposed method. Moreover, the square within the central region is not the same size as the square at the external region in the corrected image, as shown in Figure 14c. We select two white squares at the central and external regions to show the difference. The pixel count of the square within the central green rectangular region is 46,988 pixels, compared with that of the square within the external blue rectangular region being 57,490 pixels. By the proposed method, the pixel counts of these two corresponding squares are 49,033 and 48,460 pixels, as shown in Figure 14b. It means that the distortion parameter λ of the single-parameter division model estimated from this characteristic circular arc does not fit the whole image correction. On the other hand, in order to make a comparison with methods employing specific distortion models, we employ the method in [32] for distortion displacement detection. We take the distortion center point as the origin of the coordinates and perform numerical curve fitting of the discrete distortion displacements from the 1224th to the 2556th point at the 1008th row by three different distortion models: an even-order polynomial model with one and two distortion parameters, and the single-parameter division model. The even-order polynomial model [16] is described as:

ru = rd(1 + λ1rd² + λ2rd⁴ + λ3rd⁶ + ···). (12)

Figure 16 shows the curve fitting results, where the black line is the analyzed radial distortion displacement, the green line is the curve fitting result of the even-order polynomial model with λ1 = 1.6186 × 10⁻⁷, the red line is that of the even-order polynomial model with λ1 = −2.1680 × 10⁻⁸ and λ2 = 2.2025 × 10⁻¹³, and the blue line is that of the single-parameter division model with λ = −1.7198 × 10⁻⁷. We can find that the fitting result by the even-order polynomial model with two distortion parameters is better. Figure 17 shows the corresponding correction results of the checkerboard image, where Figure 17a,b are by the even-order polynomial model with one and two distortion parameters, respectively, and Figure 17c is by the single-parameter division model. The curvature radii of the lines formed by the left points on the corrected checkerboard images are 6.9263 × 10³ pixels, 2.3763 × 10⁴ pixels, and 1.1747 × 10⁴ pixels, respectively, compared with the corresponding curvature radius of 6.8656 × 10⁴ pixels by the proposed method. The pixel counts of the above-mentioned squares at the central and external regions in the corrected image by the even-order polynomial model with two distortion parameters are 48,094 and 48,694 pixels, as shown in Figure 17b. We can find that the curve fitting results rely greatly on the distortion model. Distortion determination may fail when an unsuitable model is used or too few distortion parameters are estimated. However, the more distortion parameters there are, the more complicated the solution of the reverse process becomes.
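Because the even-order polynomial displacement ru − rd = λ1rd³ + λ2rd⁵ + ··· is linear in the parameters, this kind of curve fitting reduces to ordinary least squares. A sketch on synthetic displacement data with assumed coefficients of the same magnitude as the fitted values above:

```python
import numpy as np

def fit_even_order(r_d, dr, n_params):
    """Least-squares fit of dr = l1*r_d**3 + l2*r_d**5 + ... with
    n_params coefficients (even-order polynomial model)."""
    # Design matrix: columns r_d**3, r_d**5, ...
    A = np.column_stack([r_d ** (2 * k + 1) for k in range(1, n_params + 1)])
    coef, *_ = np.linalg.lstsq(A, dr, rcond=None)
    return coef

# Synthetic radial displacement with known (assumed) parameters.
r = np.linspace(1.0, 1000.0, 200)
dr = -2.0e-8 * r**3 + 2.0e-13 * r**5
l1, l2 = fit_even_order(r, dr, 2)
print(l1, l2)  # ≈ -2e-08, 2e-13
```

With noisy measured displacements the same normal equations apply; only the residual grows, which is exactly what Figure 16 visualizes for the competing models.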


Figure 16. Radial distortion displacement and curve fitting results.

(a) (b) (c)

Figure 17. Experimental results of checkerboard images. (a) Corrected image by a one-parameter even-order polynomial model with a green line curvature radius of 6.9263 × 10³ pixels. (b) Corrected image by a two-parameter even-order polynomial model with a red line curvature radius of 2.3763 × 10⁴ pixels. The pixel count of the square within the central green rectangular region is 48,094 pixels, and that of the square within the external blue rectangular region is 48,694 pixels. (c) Corrected image by a single-parameter division model with a blue line curvature radius of 1.1747 × 10⁴ pixels.

Furthermore, the indoor and outdoor scenes are also employed in the experiment to show the practicality of the proposed method. Figures 18 and 19 show the corresponding distorted and corrected images, respectively. The experimental results show that the distorted images achieve distortion correction effectively by the proposed model-free method.
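The straightness metric quoted for the corrected lines, the curvature radius, can be estimated from three sampled points on a line via the circumradius formula R = abc/(4K). This is a minimal sketch; the sample points are hypothetical, not taken from the checkerboard data.

```python
import numpy as np

def curvature_radius(p1, p2, p3):
    """Circumradius of the circle through three points: R = abc / (4K),
    where a, b, c are the side lengths and K is the triangle area."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    a = np.linalg.norm(p2 - p3)
    b = np.linalg.norm(p1 - p3)
    c = np.linalg.norm(p1 - p2)
    k = 0.5 * abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                  - (p3[0] - p1[0]) * (p2[1] - p1[1]))
    return float('inf') if k == 0 else a * b * c / (4.0 * k)

# Hypothetical sample points on a corrected line (two ends and the middle):
R = curvature_radius((100.0, 0.0), (0.0, 100.0), (-100.0, 0.0))
```

A perfectly straight (collinear) set of points gives an infinite radius, so a larger R indicates a better-corrected line, which is how the radii above are compared.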

(a) (b)

Figure 18. Experimental results of the indoor scene. (a) Distorted image of size 2048 × 2448 pixels; (b) Corrected image of size 2496 × 2984 pixels.


(a) (b)

Figure 19. Experimental results of the outdoor scene. (a) Distorted image of size 2048 × 2448 pixels; (b) Corrected image of size 2496 × 2984 pixels.

5. Discussion

Experimental correction results of the checkerboard image by different methods are given for comparison with the proposed model-free correction method. The original straight line is distorted into a circular arc with a curvature radius of 3.2457 × 10³ pixels, as shown in Figure 15. By the plumb-line method with the single-parameter division model, by the phase analysis method with the even-order polynomial model (one and two distortion parameters) and with the single-parameter division model, and by the proposed model-free method, the curvature radii of the corresponding corrected lines on the corrected checkerboard images are 3.2821 × 10⁴, 6.9263 × 10³, 2.3763 × 10⁴, 1.1747 × 10⁴, and 6.8656 × 10⁴ pixels, respectively. The circular arc is corrected to be straightest by the proposed method. This means that the proposed method provides a superior result, and it again shows that distortion determination may fail when an unsuitable model is used. Moreover, from the comparison of the curvature radii of the corresponding corrected lines, it seems that the correction by the plumb-line method is better than that by the phase analysis method with the two-parameter even-order polynomial model. The reason is that the distortion parameter estimation by the plumb-line method is performed on the distorted circular arc at this very position. However, in the checkerboard image corrected by the plumb-line method, the square within the central region (46,988 pixels) does not match the square at the external region (57,490 pixels), which means that the estimated distortion parameters do not fit the whole image correction. Therefore, according to this analysis, more characteristic points and lines, or some more complicated algorithm or distortion model, would have to be taken into account; however, the more distortion parameters there are, the more complicated the solution of the reverse process becomes. For the proposed model-free method, all points of the distorted fringe-pattern are employed to establish the distortion displacement map, which demands no distortion model. So, the image achieves distortion correction point by point with a more effective and satisfactory result.

In the experiment of distortion displacement measurement, the errors caused by the nonideal LCD plane and the imperfect perpendicular arrangement of the optical axis of the camera and the LCD plane should be considered.

6. Conclusions

In this paper, a model-free lens distortion correction method based on the distortion displacement map obtained by the phase analysis of fringe-patterns is proposed. For image distortion correction, the most important step is to determine the distortion displacement. So, the mathematical relationship between the distortion displacement and the modulated phase of the fringe-pattern is first established in theory. Then, two sets of longitudinal and transverse fringe-patterns are employed for phase demodulation analysis to obtain the

distortion displacement ∆x and ∆y, respectively, by the phase-shifting method. The distortion displacement map can be determined point by point for the whole distorted image to achieve distortion correction. The method would be effective even when the circular symmetry

condition is not satisfied. Moreover, the method detects the radial distortion type and the distortion center automatically according to the instantaneous frequency, which is important in obtaining an optimal result. The correction results of the numerical simulation, the experiments, and the comparison show the effectiveness and superiority of the proposed method.

There are some prospects for our further work. Firstly, the relationship between the modulated phase and the distortion displacement described by the proposed method would still hold for mixed distortion of the radial and tangential types. However, if the tangential distortion is severe, the distortion center would no longer be the position where the minimum or maximum instantaneous frequency appears, so how to determine the distortion center automatically in this case should be considered. Secondly, the optimal frequency of the measuring fringe-patterns for accurate modulated phase analysis should be investigated. Thirdly, practical applications of the proposed method should be implemented.
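The pipeline summarized above can be illustrated in outline: phase-shifting demodulation of fringe-patterns yields a per-pixel displacement, which then drives point-by-point resampling. The following 1-D sketch uses synthetic data; the fringe period, the displacement field, and the sign convention (a camera pixel x observing world point x + Δ(x)) are assumptions for the demo, not the paper's exact parameters or procedure.

```python
import numpy as np

# Assumed demo parameters: fringe period p and a synthetic, slowly varying
# displacement field. None of these come from the paper's measured data.
p = 32.0
x = np.arange(256, dtype=float)
dx_true = 2.0 + x / 256.0                      # synthetic displacement (pixels)

def four_step_phase(pos):
    # Standard 4-step phase shifting: phi = atan2(I4 - I2, I1 - I3)
    # for fringe-patterns I_n = 0.5 + 0.5*cos(2*pi*pos/p + n*pi/2).
    I = [0.5 + 0.5 * np.cos(2 * np.pi * pos / p + s)
         for s in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]
    return np.arctan2(I[3] - I[1], I[0] - I[2])

# Displacement from the wrapped phase difference between the captured
# (distorted) and reference fringe phases: dx = dphi * p / (2*pi).
dphi = np.angle(np.exp(1j * (four_step_phase(x + dx_true) - four_step_phase(x))))
dx = dphi * p / (2.0 * np.pi)

# Model-free, point-by-point correction of a captured scene: to first
# order, the corrected value at x is read from the captured signal near
# x - dx(x) (nearest-neighbour sampling here; real code would interpolate).
captured = np.sin(2.0 * np.pi * (x + dx_true) / 64.0)   # stand-in scene row
idx = np.clip(np.rint(x - dx).astype(int), 0, x.size - 1)
corrected = captured[idx]
```

Because the demo displacement stays well below half a fringe period, no phase unwrapping is needed; larger displacements would require the unwrapping step cited in the text.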

Author Contributions: Conceptualization, J.Z. and J.W.; software, J.W., S.M. and P.Q.; investigation, W.Z.; writing, J.W. and J.Z.; supervision, J.Z. All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded by the National Natural Science Foundation of China, grant numbers 61875074 and 61971201; the Natural Science Foundation of Guangdong Province, grant number 2018A030313912; and the Open Fund of the Guangdong Provincial Key Laboratory of Optical Fiber Sensing and Communications (Jinan University).

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: Data sharing not applicable.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Collins, T.; Bartoli, A. Planar structure-from-motion with affine camera models: Closed-form solutions, ambiguities and degeneracy analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1237–1255. [CrossRef]
2. Guan, H.; Smith, W.A.P. Structure-from-motion in spherical video using the von Mises-Fisher distribution. IEEE Trans. Image Process. 2017, 26, 711–723. [CrossRef]
3. Herrera, J.L.; Del-Blanco, C.R.; Garcia, N. Automatic depth extraction from 2D images using a cluster-based learning framework. IEEE Trans. Image Process. 2018, 27, 3288–3299. [CrossRef]
4. Wang, Y.; Deng, W. Generative model with coordinate metric learning for object recognition based on 3D models. IEEE Trans. Image Process. 2018, 27, 5813–5826. [CrossRef]
5. Devernay, F.; Faugeras, O. Straight lines have to be straight. Mach. Vis. Appl. 2001, 13, 14–24. [CrossRef]
6. Ahmed, M.; Farag, A. Non-metric calibration of camera lens distortion: Differential methods and robust estimation. IEEE Trans. Image Process. 2005, 14, 1215–1230. [CrossRef]
7. Cai, B.; Wang, Y.; Wu, J.; Wang, M.; Li, F.; Ma, M.; Chen, X.; Wang, K. An effective method for camera calibration in defocus scene with circular gratings. Opt. Lasers Eng. 2019, 114, 44–49. [CrossRef]
8. Swaminathan, R.; Nayar, S. Non metric calibration of wide-angle lenses and polycameras. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1172–1178. [CrossRef]
9. Hartley, R.; Kang, S.B. Parameter-free radial distortion correction with center of distortion estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1309–1321. [CrossRef]
10. Kukelova, Z.; Pajdla, T. A minimal solution to radial distortion autocalibration. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2410–2422. [CrossRef]
11. Gao, Y.; Lin, C.; Zhao, Y.; Wang, X.; Wei, S.; Huang, Q. 3-D surround view for advanced driver assistance systems. IEEE Trans. Intell. Transp. Syst. 2018, 19, 320–328. [CrossRef]
12. Bukhari, F.; Dailey, M.N. Automatic radial distortion estimation from a single image. J. Math. Imaging Vis. 2013, 45, 31–45. [CrossRef]
13. Alemán-Flores, M.; Alvarez, L.; Gómez, L.; Santana-Cedrés, D. Automatic lens distortion correction using one-parameter division models. Image Process. On Line 2014, 4, 327–343. [CrossRef]
14. Santana-Cedrés, D.; Gómez, L.; Alemán-Flores, M. An iterative optimization algorithm for lens distortion correction using two-parameter models. Image Process. On Line 2016, 5, 326–364. [CrossRef]
15. Li, L.; Liu, W.; Xing, W. Robust radial distortion correction from a single image. In Proceedings of the 2017 IEEE 15th International Conference on Dependable, Autonomic and Secure Computing, 15th International Conference on Pervasive Intelligence and Computing, 3rd International Conference on Big Data Intelligence and Computing and Cyber Science and Technology Congress (DASC/PiCom/DataCom/CyberSciTech), Orlando, FL, USA, 6–10 November 2017; pp. 766–772.
16. Brown, D.C. Close-range camera calibration. Photogramm. Eng. 1971, 37, 855–886.
17. Fitzgibbon, A.W. Simultaneous linear estimation of multiple view geometry and lens distortion. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), Kauai, HI, USA, 8–14 December 2001; pp. I125–I132.
18. Kannala, J.; Brandt, S. A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1335–1340. [CrossRef]
19. Rong, J.; Huang, S.; Shang, Z.; Ying, X. Radial lens distortion correction using convolutional neural networks trained with synthesized images. In Asian Conference on Computer Vision; Springer: Cham, Switzerland, 2016; pp. 35–49.
20. Yin, X.; Wang, X.; Yu, J.; Zhang, M.; Fua, P.; Tao, D. FishEyeRecNet: A multi-context collaborative deep network for fisheye image rectification. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 475–490.
21. Xue, Z.; Xue, N.; Xia, G.; Shen, W. Learning to calibrate straight lines for fisheye image rectification. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 1643–1651.
22. Liao, K.; Lin, C.; Zhao, Y.; Xu, M. Model-free distortion rectification framework bridged by distortion distribution map. IEEE Trans. Image Process. 2020, 29, 3707–3717. [CrossRef]
23. Li, X.; Zhang, B.; Sander, P.V.; Liao, J. Blind geometric distortion correction on images through deep learning. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 4850–4859.
24. Munjy, R.A.H. Self-calibration using the finite element approach. Photogramm. Eng. Remote Sens. 1986, 52, 411–418.
25. Tecklenburg, W.; Luhmann, T.; Heidi, H. Camera modelling with image-variant parameters and finite elements. In Optical 3-D Measurement Techniques V; Wichmann Verlag: Heidelberg, Germany, 2001.
26. Gioi, R.G.V.; Monasse, P.; Morel, J.M.; Tang, Z. Towards high-precision lens distortion correction. In Proceedings of the International Conference on Image Processing (ICIP 2010), Hong Kong, China, 12–15 September 2010; pp. 4237–4240.
27. Takeda, M.; Mutoh, K. Fourier transform profilometry for the automatic measurement of 3-D object shapes. Appl. Opt. 1983, 22, 3977–3982. [CrossRef]
28. Zhong, J.; Weng, J. Phase retrieval of optical fringe patterns from the ridge of a wavelet transform. Opt. Lett. 2005, 30, 2560–2562. [CrossRef]
29. Weng, J.; Zhong, J.; Hu, C. Digital reconstruction based on angular spectrum diffraction with the ridge of wavelet transform in holographic phase-contrast microscopy. Opt. Express 2008, 16, 21971–21981. [CrossRef] [PubMed]
30. Braeuer-Burchardt, C. Correcting lens distortion in 3D measuring systems using fringe projection. Proc. SPIE Int. Soc. Opt. Eng. 2005, 5962, 155–165.
31. Li, K.; Bu, J.; Zhang, D. Lens distortion elimination for improving measurement accuracy of fringe projection profilometry. Opt. Lasers Eng. 2016, 85, 53–64. [CrossRef]
32. Zhou, W.; Weng, J.; Peng, J.; Zhong, J. Wide-angle lenses distortion calibration using phase demodulation of phase-shifting fringe-patterns. Infrared Laser Eng. 2020, 49, 20200039. [CrossRef]
33. Zuo, C.; Feng, S.; Huang, L.; Tao, T.; Yin, W.; Chen, Q. Phase shifting algorithms for fringe projection profilometry: A review. Opt. Lasers Eng. 2018, 109, 23–59. [CrossRef]
34. Ghiglia, D.C.; Pritt, M.D. Two-Dimensional Phase Unwrapping: Theory, Algorithms, and Software; Wiley: Hoboken, NJ, USA, 1998; pp. 181–215.