

Article An Improved Calibration Method for Photonic Mixer Device Solid-State Array Lidars Based on Electrical Analog Delay

Xuanquan Wang, Ping Song * and Wuyang Zhang

Key Laboratory of Biomimetic Robots and Systems (Ministry of Education), Beijing Institute of Technology, Beijing 100081, China; [email protected] (X.W.); [email protected] (W.Z.) * Correspondence: [email protected]; Tel.: +86-136-6136-5650

 Received: 15 November 2020; Accepted: 16 December 2020; Published: 20 December 2020 

Abstract: As a typical application of indirect time-of-flight (ToF) technology, the photonic mixer device (PMD) solid-state array Lidar has developed rapidly in recent years. With the advantages of high resolution, frame rate and accuracy, the equipment is widely used in target recognition, simultaneous localization and mapping (SLAM), industrial inspection, etc. However, the PMD Lidar is vulnerable to several factors such as ambient light, temperature and the target feature. To eliminate the impact of such factors, a proper calibration is needed. The conventional calibration methods need to change several distances over large areas, which results in low efficiency and low accuracy. To address these problems, this paper presents an improved calibration method based on electrical analog delay. The method first eliminates the lens distortion using a self-adaptive interpolation algorithm, and meanwhile calibrates the grayscale image using an integration time simulating based method. Then, the grayscale image is used to estimate the parameters of ambient light compensation in depth calibration. Finally, by combining four types of compensation, the method effectively improves the performance of depth calibration. Several experiments show that the proposed method is more adaptive to multiple scenes with targets of different reflectivities, which significantly improves the ranging accuracy and adaptability of the PMD Lidar.

Keywords: photonic mixer device; PMD solid-state array Lidar; electrical analog delay; joint calibration algorithm; self-adaptive grayscale correlation; depth calibration method

1. Introduction

Three-dimensional information acquisition has gained extensive attention in the fields of computer vision, robot navigation, human–computer interaction, automatic driving, etc. [1]. Generally, the methods to obtain three-dimensional information mainly include stereo vision [2,3], structural light [4], single-pixel 3D imaging [5] and time-of-flight (ToF) [6]. Stereo vision needs an advanced matching algorithm to obtain accurate depth information, which is vulnerable to ambient light. Structural light needs projection optimization compensation, which requires high performance of the processing system. Compared with the two methods above, the ToF sensor system utilizes an active infrared laser to achieve depth information acquisition, which has the advantages of low cost, high frame frequency and high reliability [7,8]. Photonic mixer device (PMD) solid-state array Lidar, as one typical kind of ToF sensor system, is widely used in computer vision [9,10]. However, there inevitably exist several error sources (such as ambient light, integration time, temperature drift and reflectivity), which reduce the performance of the ToF sensor significantly. Hence, the equipment needs to be properly calibrated to achieve reliable depth information acquisition [11].

Sensors 2020, 20, 7329; doi:10.3390/s20247329

Several works have been done on PMD Lidar calibration. Lindner [11–14] put forward a calibration approach which combined the overall intrinsic, distance and reflectivity-related error calibration. Compared with the other approaches, the calibration provided a significant reduction of calibration data. Kahlman [15,16] presented a parameter-based calibration approach, which considered multiple error sources including integration time, reflectivity, distance and temperature. The accuracy was improved to 10 mm at a distance of 2.5 m after calibration. Steiger [17] discussed the influence of internal factors and environmental factors. The effect of these factors was then compensated by experiments. However, the errors were above the uncertainties specified by the manufacturer even after calibration. Swadzba [18] put forward a calibration algorithm based on stepwise optimization and the particle filter framework. The experimental results showed that the accuracy of the method was higher than that of the traditional calibration method, while the efficiency was decreased. Schiller [19] discussed a joint calibration method based on a PMD camera and a standard 2D CCD camera. Results showed that the internal camera parameters were estimated more precisely. In addition, the limitations of the small field-of-view were overcome by the method. Fuchs [20,21] presented a calibration process for the ToF camera with respect to the intrinsic parameters, the depth measurement distortion and the pose of the camera relative to a robot's end effector. Chiabrando [22] performed two aspects of the calibration: distance calibration and photogrammetric calibration. For distance calibration, they reduced the distance error to ±15 mm in the range of 1.5–4 m. For photogrammetric calibration, they verified the stability of the estimated camera internal parameters. Christian [23] presented a calibration approach based on the depth and reflectance image of a planar checkerboard. The method improved the efficiency and the accuracy of the calibration of the focal length and 3D pose of the camera. However, the depth accuracy was not improved. Kuhnert [24] raised a joint calibration method based on two types of 3D cameras, a PMD camera and a stereo camera system, to improve the range accuracy by using one camera to compensate the other. Schmidt [25] proposed a dynamic calibration method, which can be executed on systems with limited resources. Huang [26] raised an integration time auto-adaptation method based on amplitude data, which makes each pixel obtain the depth information under the best conditions. Meanwhile, a Gaussian process regression model was utilized to calibrate the depth errors. He [27] analyzed the influence of several external distractions (including material, color, distance, lighting, etc.) and proposed an error correction method based on the particle filter-support vector machine (PF-SVM). To sum up, most research on ToF camera calibration focuses on the parameters related to measurement accuracy, such as integration time, pixel-related errors or depth data distortion [28–35]. These methods generally need to place the calibration plate at different distances to acquire a depth compensation look-up table (LUT), which requires a significant amount of work. In addition, the calibration plate needs to be placed manually. Even a slight change of the angle or the location will introduce an extra measuring error of several millimeters. Thus, the process is vulnerable to human factors.
Others [36–41] obtain depth compensation data by changing the attitude of the ToF camera with external devices, which needs complex algorithms to fuse the multisource information. Consequently, there still exist unresolved issues such as a heavy workload, complex calculation requirements and serious human disturbance. A simpler calibration method is needed to improve the applicability, accuracy and convenience of ToF cameras. To deal with the abovementioned challenges, in previous work we [42] proposed a calibration method for PMD solid-state array Lidar based on a black-box calibration device and an electrical analog delay method. The method solved part of the problems in traditional calibration methods, such as low efficiency, low accuracy and serious human disturbance. However, some factors still have not been taken into account, such as temperature drift, targets with different reflectivities and the disturbance of ambient light, which could bring extra errors during the application of the PMD solid-state array Lidar. For this reason, this paper improves the calibration setting and the method, and the main contributions are as follows:

(1). This paper proposed a self-adaptive grayscale correlation based depth calibration method (SA-GCDCM) for PMD solid-state array Lidar. Due to its special structure, the PMD Lidar has the ability to obtain a grayscale image and a depth image simultaneously. Meanwhile, the amplitude of the grayscale image has a close relationship with ambient light. Based on this, the grayscale image was used to estimate the parameters of ambient light compensation in depth calibration. Through SA-GCDCM, the disturbance of ambient light could be effectively eliminated. Traditional joint calibration methods always need an extra RGB camera to cooperate with the ToF camera. The inconformity of the parameters of the two cameras, such as the image resolution and the field of view, can introduce extra errors to the system, leading to low calibration accuracy and efficiency. Compared with the traditional methods, this method has no requirement of coordinate transformation and feature matching, leading to better data consistency and self-adaptability.

(2). This paper proposed a grayscale calibration method based on integration time simulating. Firstly, the raw grayscale images were acquired under multiple ambient light levels by setting the integration time to several values. Then the spatial variances were calculated from the images to estimate the dark signal non-uniformity (DSNU) and photo response non-uniformity (PRNU). At last, the influence of DSNU and PRNU was eliminated by a correction algorithm.

(3). Based on the electrical analog delay method, a comprehensive, multiscene adaptive multifactor calibration model was established by combining the SA-GCDCM with raw distance demodulation compensation, distance response non-uniformity (DRNU) compensation and temperature drift compensation. Compared with the prior methods, the established model is more adaptive to multiple scenes with targets of different reflectivities, which significantly improves the ranging accuracy and adaptability of the PMD Lidar.

The article is structured as follows: an introduction of the working principle of the PMD Lidar is given in Section 2, along with a discussion of known error sources such as integration time, temperature and DRNU. The combined calibration method is presented in Section 3. Section 4 introduces the experiments and discusses the results. Finally, a short summary is given in Section 5.

2. System Introduction

2.1. Principle of PMD Solid-State Array Lidar

The PMD solid-state array Lidar mainly includes three parts: the emitting unit, the receiving unit and the processing unit. The emitting unit is composed of four vertical-cavity surface-emitting lasers (VCSELs), the driver, the delay-locked loop (DLL) and the modulator. Compared with the light emitting diode (LED), the VCSEL has attracted attention in recent ToF lidar research [43–45] because of its lower power consumption, higher speed and higher reliability [46]. The receiving unit is composed of the lens, the sensor, the demodulator, the A/D converter and the sequence controller. The processing unit is a digital signal processing (DSP) controller. The fundamental principle [47,48] of the PMD solid-state array Lidar is illustrated in Figure 1. The continuous modulated near infrared (NIR) laser is generated and emitted by the emitting unit. After reflecting at the surface of the objects, the laser is received by the receiving unit. The optical signal is converted into an electrical signal in the receiving unit. Then the electrical signal is transmitted to the processing unit, which calculates the distance data by demodulating the phase delay between the emitted and the detected signal. Finally, three types of images, point cloud, grayscale and depth images, can be output from the DSP controller through data processing.

Figure 1. The fundamental principle of the photonic mixer device (PMD) solid-state array Lidar.

Signal demodulation is the key step during the working process of the PMD solid-state array Lidar, which is shown in Figure 2. Two different capacitors (CA and CB) with two phase windows (0° and 180°) are set under each pixel of the ToF chip. The differential correlation sampling (DCS) method was used to demodulate the received signal. In general, the sampling number determined the accuracy of the demodulation, while the efficiency could be accordingly influenced. In this paper, the four-step phase-shift method was adopted for sampling. In other words, the process of demodulation was to sample the received signal at four different phases (0°, 90°, 180° and 270°) respectively by using the capacitors of the two phase windows, and then suppress the noise by obtaining the difference between these capacitors. The phase shift of the modulated light was calculated according to the sampling amplitude. At last the target distance was calculated from the phase shift.

Figure 2. Signal demodulation method.

The specific process of the four-step phase-shift method was performed as follows. The emitted light signal can be presented as E(t) = Acos(ωt), while the received signal can be presented as R(t) = B + kAcos(ωt + Δφ), where ω is the modulation frequency, A is the amplitude of the emitted signal, Δφ means the phase shift between the emitted signal and the received signal, B is the noise signal generated during the transmission of light and k means the signal attenuation coefficient. The sampling process can be expressed in Equation (1):

Q_1^{DC0} = \int_{0}^{\pi/\omega} [B + kA\cos(\omega t + \Delta\varphi)]\,dt \qquad Q_2^{DC0} = \int_{\pi/\omega}^{2\pi/\omega} [B + kA\cos(\omega t + \Delta\varphi)]\,dt
Q_1^{DC1} = \int_{\pi/2\omega}^{3\pi/2\omega} [B + kA\cos(\omega t + \Delta\varphi)]\,dt \qquad Q_2^{DC1} = \int_{3\pi/2\omega}^{5\pi/2\omega} [B + kA\cos(\omega t + \Delta\varphi)]\,dt
Q_1^{DC2} = \int_{\pi/\omega}^{2\pi/\omega} [B + kA\cos(\omega t + \Delta\varphi)]\,dt \qquad Q_2^{DC2} = \int_{2\pi/\omega}^{3\pi/\omega} [B + kA\cos(\omega t + \Delta\varphi)]\,dt
Q_1^{DC3} = \int_{3\pi/2\omega}^{5\pi/2\omega} [B + kA\cos(\omega t + \Delta\varphi)]\,dt \qquad Q_2^{DC3} = \int_{5\pi/2\omega}^{7\pi/2\omega} [B + kA\cos(\omega t + \Delta\varphi)]\,dt   (1)

where Q_1^{DCi} and Q_2^{DCi} are the integral values of capacitors CA and CB at sampling point i, respectively.

DC0 = Q_1^{DC0} - Q_2^{DC0} \qquad DC1 = Q_1^{DC1} - Q_2^{DC1} \qquad DC2 = Q_1^{DC2} - Q_2^{DC2} \qquad DC3 = Q_1^{DC3} - Q_2^{DC3}   (2)

The distance is calculated by Equation (3):

D_{raw}(x, y) = \frac{c}{2} \times \frac{1}{2\pi f} \times \mathrm{atan2}\left(DC3(x, y) - DC1(x, y),\ DC2(x, y) - DC0(x, y)\right)   (3)
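As a compact illustration of Equations (2) and (3), the sketch below demodulates a raw distance from the four differential samples. It is a minimal sketch rather than the device firmware: the modulation frequency F_MOD is an assumed placeholder (the value is not given here) and the wrapping of the phase to [0, 2π) is our addition.

```python
import numpy as np

C = 299_792_458.0   # speed of light (m/s)
F_MOD = 24e6        # assumed modulation frequency (Hz); not specified in this section

def demodulate_distance(dc0, dc1, dc2, dc3, f_mod=F_MOD):
    """Four-step phase-shift demodulation following Equations (2) and (3).

    dc0..dc3 are the differential samples DCi = Q1_DCi - Q2_DCi per pixel,
    given as scalars or 2-D NumPy arrays.
    """
    # Phase shift of the modulated light, Equation (3)
    phase = np.arctan2(dc3 - dc1, dc2 - dc0)
    # Wrap to [0, 2*pi) so the distances come out non-negative
    # (the branch handling is left implicit in Equation (3))
    phase = np.mod(phase, 2.0 * np.pi)
    return (C / 2.0) * phase / (2.0 * np.pi * f_mod)
```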

2.2. Analysis of Error Sources of PMD Solid-State Array Lidar

The PMD Lidar is vulnerable to several factors such as internal non-uniformity of the ToF sensor, the demodulation process, temperature drift, ambient light, etc. Some of the factors have been discussed in our previous work [42] and will not be repeated in this paper. However, factors like integration time, temperature drift and DRNU still need to be considered. The analysis of these factors and the qualitative study are described in detail as follows.

2.2.1. Integration Time

Integration time is the time span to output individual data. In general, the integration time can be set from tens to thousands of microseconds. Too short an integration time brings the loss of local information, as shown in Figure 3a, while too long an integration time will exceed the ToF sensor's receiving range, leading to a local saturation, as shown in Figure 3c. A proper value is needed to capture a sufficient number of photoelectrons without saturation, as shown in Figure 3b.


Figure 3. Depth images under different integration time. (a) Under 50 µs; (b) under 300 µs and (c) under 650 µs. The images were captured with a flat board.
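As a purely illustrative aid (not a procedure from the paper), the helper below shows one way a proper integration time could be screened for: it flags the fractions of saturated and too-dark pixels in an amplitude frame. The thresholds are assumed example values.

```python
import numpy as np

SATURATION_LEVEL = 4095   # assumed 12-bit ADC full scale
NOISE_FLOOR = 100         # assumed minimum usable amplitude (arbitrary units)

def exposure_status(amplitude_image):
    """Return the fractions of saturated and too-dark pixels in a frame."""
    amp = np.asarray(amplitude_image, dtype=float)
    saturated = float(np.mean(amp >= SATURATION_LEVEL))
    too_dark = float(np.mean(amp <= NOISE_FLOOR))
    return saturated, too_dark

# Example policy: shorten the integration time if many pixels saturate,
# lengthen it if many pixels fall below the noise floor.
```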

2.2.2. Temperature Drift

The ToF sensor is susceptible to the environment temperature and the heat generated by itself during its working, leading to an uneven temperature distribution. However, the electron mobility in the sensor is temperature dependent. The higher the temperature, the lower the electron mobility, leading to a non-uniformity of measurement, as shown in Figure 4.

Figure 4. Non-uniform measurement due to uneven temperature distribution.

In addition, the illumination driver and the external circuit also have a temperature-dependent demodulation delay, which affects the distance measurement.

2.2.3. Distance Response Non-Uniformity (DRNU)

The ToF sensor typically has many analog-to-digital converters (ADCs) arranged along the columns of the pixel field. The different ADCs have slightly different behaviors and result in a non-uniformity between the columns. In addition, this type of error also exists between rows due to the non-uniformity of the row addressing signals. These two types of non-uniformities lead to differences of demodulation from pixel to pixel, which is called distance response non-uniformity (DRNU). This type of error also needs to be compensated.

For instance, the phase shift is calculated with Equation (4):

\varphi = \arctan\left(\frac{DC3 - DC1}{DC2 - DC0}\right)   (4)

While the real phase shift without DRNU compensation is calculated with Equation (5):

\varphi = \arctan\left(\frac{(DC3 + a) - (DC1 + b)}{(DC2 + c) - (DC0 + d)}\right)   (5)
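To make the effect of DRNU concrete, the short sketch below evaluates Equations (4) and (5) for a single pixel. The sample values and the offsets a–d are made-up illustration numbers, not measured ones, and arctan2 is used in place of atan to keep the quadrant.

```python
import numpy as np

def phase(dc0, dc1, dc2, dc3):
    """Phase shift from the four DCS samples, Equation (4)
    (arctan2 resolves the quadrant)."""
    return np.arctan2(dc3 - dc1, dc2 - dc0)

# Synthetic DCS samples of one pixel (arbitrary units)
dc0, dc1, dc2, dc3 = 120.0, 40.0, -80.0, 95.0

# Hypothetical per-column/per-row offsets a, b, c, d of Equation (5)
a, b, c, d = 3.0, -2.0, 1.5, -1.0

phi_ideal = phase(dc0, dc1, dc2, dc3)
phi_drnu = phase(dc0 + d, dc1 + b, dc2 + c, dc3 + a)

print(f"phase without offsets: {phi_ideal:.4f} rad")
print(f"phase with offsets:    {phi_drnu:.4f} rad (error {phi_drnu - phi_ideal:+.4f} rad)")
```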

Figure 5 demonstrates a depth image without DRNU compensation, where the non-uniformity is clearly non-negligible.

Figure 5. Depth image without distance response non-uniformity (DRNU) compensation.

3. Methodology

The conventional calibration methods are time-consuming and complex due to the existence of multiple error sources. Based on the previous work, this paper put forward an improved calibration method based on the electrical analog delay. The method fused various error compensations into a comprehensive calibration model, where the grayscale image and the depth image were calibrated jointly, as shown in Figure 6. The lens distortion was corrected using a self-adaptive interpolation algorithm based on Zhang's [49] calibration method. For grayscale image correction, DSNU and PRNU were compensated with an integration time simulating based method. For depth information correction, the grayscale image was used to estimate the parameters of ambient light compensation. After calculating the raw depth data, the pixel fixed-pattern noise was eliminated by DRNU error compensation. The temperature drift error was compensated at last.


Figure 6. The comprehensive calibration model process.

3.1. Lens Distortion Correction

The internal and external parameters of the PMD solid-state array Lidar were obtained through Zhang's [49] calibration method, which is not detailed in this paper. Different from the grayscale image, there may exist holes on the depth image (where the received signal is too low to demodulate a valid signal), resulting in the inapplicability of the traditional correction algorithm. The pixel adaptive interpolation strategy we proposed in [42] was utilized in this paper to solve the problem, which is presented in Table 1.

Table 1. Pixel adaptive interpolation strategy.

Case: D1, D2, D3 and D4 all valid: D_0 = (1 - \alpha_x)(1 - \alpha_y)D_1 + \alpha_x(1 - \alpha_y)D_2 + (1 - \alpha_x)\alpha_y D_3 + \alpha_x\alpha_y D_4
Case: three valid neighbors (D1, D2 and D4): D_0 = uD_1 + vD_2 + (1 - u - v)D_4
Case: only D1 and D4 valid: D_0 = \left(1 - \frac{x_{p0} + y_{p0}}{2}\right)D_1 + \frac{x_{p0} + y_{p0}}{2}D_4
Case: only D3 and D4 valid: D_0 = \alpha_y D_4 + (1 - \alpha_y)D_3
Case: only one valid neighbor D_x: D_0 = D_x
Case: no valid neighbor: D_0 = \mathrm{NaN}


Green points represent the projection of the corrected pixels on the raw image. Purple points are the surrounding pixels of the green ones, in which the solid purple points mean real distance pixels while the hollow ones represent holes. D0, D1, D2, D3 and D4 denote the pixel to be interpolated and its lower-left, top-left, lower-right and top-right pixels, respectively. (xp0, yp0), (xp1, yp1), (xp2, yp2), (xp3, yp3) and (xp4, yp4) are the coordinates of the pixel to be interpolated and of its lower-left, top-left, lower-right and top-right pixels, respectively. αx = xp0 − xp1, αy = yp0 − yp1 and (u, v) are the coordinates of the pixels to be interpolated under a barycentric coordinate system.
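As a rough sketch of how the strategy in Table 1 could look in code, the function below handles the full-neighborhood (bilinear) case and the simplest hole fallbacks; the partially valid cases are deliberately simplified to a mean of the valid neighbors, so this is an illustration of the idea rather than the exact Table 1 rules.

```python
import numpy as np

def interpolate_pixel(d1, d2, d3, d4, ax, ay):
    """Pixel adaptive interpolation for one corrected pixel D0.

    d1..d4: lower-left, top-left, lower-right, top-right neighbors on the
            raw depth image (np.nan marks a hole); ax, ay: fractional offsets.
    """
    neighbors = np.array([d1, d2, d3, d4], dtype=float)
    valid = ~np.isnan(neighbors)

    if valid.all():
        # All four neighbors valid: bilinear interpolation (first row of Table 1)
        return ((1 - ax) * (1 - ay) * d1 + ax * (1 - ay) * d2
                + (1 - ax) * ay * d3 + ax * ay * d4)
    if valid.sum() == 1:
        # Only one valid neighbor: copy its value (D0 = Dx)
        return float(neighbors[valid][0])
    if valid.sum() == 0:
        # No valid neighbor: leave a hole
        return np.nan
    # Two or three valid neighbors: fall back to the mean of the valid ones
    # (a simplification; Table 1 uses case-specific weights here)
    return float(neighbors[valid].mean())
```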

3.2. Grayscale Image Calibration

A grayscale image is acquired by the ToF chip when the PMD solid-state array Lidar works in the passive mode. The acquisition process is consistent with the intensity image obtained by a traditional complementary metal oxide semiconductor (CMOS) chip, which can be used to characterize the intensity of ambient light. In general, the grayscale image is vulnerable to DSNU and PRNU, where DSNU represents the differences of gray values between pixels captured under the dark condition and PRNU represents the differences of gray values between pixels captured under the common condition. The most commonly used approach to correct the influence of DSNU and PRNU has a standardized process, which can be found in the European Machine Vision Association (EMVA) Standard 1288 [50]. In this approach, one dark image and one bright image are captured under the same exposure condition. DSNU and PRNU are then calculated from the images. However, there exist some limitations of the approach. For instance, the images are captured under a specific condition, leading to bad applicability. The ambient light is set artificially, which introduces extra error. Based on this approach, an integration time simulating based method was proposed in this paper. The main contributions of the method include: (I) instead of setting the ambient light artificially, the levels of ambient light were simulated by setting the integration times and (II) the non-uniformity of the exposure was eliminated by calculating the mean value of multiple ambient light levels. The process of the method is given as follows.

(1). Set the integration time to 0 µs to simulate the dark condition. Collect N = 100 frames of grayscale images and calculate the mean value.

Y_{dark\text{-}avg} = \frac{1}{N}\sum_{x=1}^{320}\sum_{y=1}^{240}(x, y, N)_{dark}/(320\times 240)   (6)

(2). Change the integration time to simulate different ambient light levels. Collect N = 100 frames of grayscale images under amplitudes of 10%, 30%, 50% and 80% respectively. Similarly, the mean values with different amplitudes are obtained.

Y_{AL\text{-}avg} = \frac{1}{N}\sum_{x=1}^{320}\sum_{y=1}^{240}(x, y, N)_{AL}/(320\times 240)   (7)

(3). Calculate the spatial variances under different ambient levels. Spatial variance is simply an overall measure of the spatial nonuniformity, which is helpful to estimate DSNU and PRNU.

S_{dark}^2 = \sum_{x=1}^{320}\sum_{y=1}^{240}\left[(x, y)_{dark} - Y_{dark\text{-}avg}\right]^2/(320\times 240 - 1)
S_{AL}^2 = \sum_{x=1}^{320}\sum_{y=1}^{240}\left[(x, y)_{AL} - Y_{AL\text{-}avg}\right]^2/(320\times 240 - 1)   (8)

(4). Calculate the correction values of DSNU and PRNU.

b_{DSNU} = S_{dark}
k_{PRNU} = \sqrt{S_{AL}^2 - S_{dark}^2}\,/\left(Y_{AL\text{-}avg} - Y_{dark\text{-}avg}\right)   (9)

(5). The grayscale compensation of pixel (x, y) is calculated by Equation (10).

I_{corr}(x, y) = \left[I_{raw}(x, y) - b_{DSNU}(x, y)\right]\times k_{PRNU}   (10)

where Ydark-avg is the mean value of the grayscale images under the dark condition, YAL-avg is the mean value of the grayscale images under ambient light, N is the number of frames, S²dark and S²AL are the spatial variances under the dark and ambient light conditions, respectively, bDSNU is the offset of DSNU, kPRNU is the gain of PRNU and Icorr(x, y) is the compensation value of the grayscale image after calibration.
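A compact sketch of steps (1)–(5) follows, assuming the captured frames are already available as NumPy arrays of shape (N, 240, 320) and that the spatial statistics of Equation (8) are taken over the temporally averaged images; the function and array names are illustrative, and the frame loading is left out.

```python
import numpy as np

def dsnu_prnu_calibration(dark_frames, ambient_frames):
    """Estimate the DSNU offset and PRNU gain (Equations (6)-(9)) and return
    a correction function implementing Equation (10).

    dark_frames, ambient_frames: arrays of shape (N, 240, 320).
    """
    dark_mean = dark_frames.mean(axis=0)      # temporal mean per pixel
    ambient_mean = ambient_frames.mean(axis=0)

    y_dark_avg = dark_mean.mean()             # Equation (6)
    y_al_avg = ambient_mean.mean()            # Equation (7)

    # Spatial variances, Equation (8); ddof=1 gives the (320*240 - 1) divisor
    s2_dark = dark_mean.var(ddof=1)
    s2_al = ambient_mean.var(ddof=1)

    b_dsnu = np.sqrt(s2_dark)                                    # Equation (9)
    k_prnu = np.sqrt(s2_al - s2_dark) / (y_al_avg - y_dark_avg)

    def correct(raw_gray):                                       # Equation (10)
        return (raw_gray - b_dsnu) * k_prnu

    return correct
```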

3.3. Depth Image Calibration

3.3.1. Ambient Light Compensation

Due to its special structure, the PMD solid-state array Lidar has the ability to obtain a grayscale image and a depth image simultaneously. Meanwhile, the amplitude of a grayscale image has a close relationship with ambient light. Based on this, the grayscale image was used to estimate the parameters of ambient light compensation in depth calibration, which is the self-adaptive grayscale correlation based depth calibration method (SA-GCDCM). The idea of the method is to eliminate the influence of ambient light in the sampling stage by introducing an ambient light correction factor KAL. The factor KAL is calculated from several DCs sampled under different ambient light levels. The ambient light is controlled accurately by adjusting the integration time. KAL is then utilized to correct the DCs in the real sampling process. There exist internal noise and external errors during the sampling, which are corrected in the calculation. The process of the method is as follows (all the following measurements use the spatial average of the region of interest (ROI) within the coordinates (100,70) and (220,165) and the temporal average of 100 frames as a default):

(1). Turn the ambient light on and record the amplitude of the grayscale image as Qgray.

(2). Change the amplitude of the grayscale image to 0.5 times that of Qgray by adjusting the integration time. Measure DC0/2 and record them as DC0setting1 and DC2setting1, respectively.

(3). Change the amplitude of the grayscale image to 1.5 times that of Qgray by adjusting the integration time. Measure DC0/2 and record them as DC0setting2 and DC2setting2, respectively.

(4). Turn the ambient light off and measure DC0/2, which are recorded as DC0no and DC2no, respectively.

(5). Calculate the four measurements Q0_1, Q0_2, Q2_1 and Q2_2.

Q0_1 = DC0_{setting1} - DC0_{no} \qquad Q2_1 = DC2_{setting1} - DC2_{no}
Q0_2 = DC0_{setting2} - DC0_{no} \qquad Q2_2 = DC2_{setting2} - DC2_{no}   (11)

(6). Correct the errors generated in the sampling. There inevitably exist internal noise and external error. The internal noise mainly comes from the internal circuit and can be eliminated by subtracting two samples at the same phase. The external error mainly comes from the instability of the environment. It can be suppressed by calculating the mean value of the samples.

k = \frac{(Q2_2 - Q2_1) + (Q0_2 - Q0_1)}{2}   (12)

(7). The ambient light correction factor KAL is calculated by Equation (13):

K_{AL} = \frac{Q_{gray}}{k}   (13)

where DC0/1/2/3 are the sample values acquired at 0°, 90°, 180° and 270°, respectively.

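A hedged sketch of steps (5)–(7) and of how KAL then enters Equations (14) and (16) is given below; the modulation frequency and all measurement values are placeholders, and the helpers are illustrative rather than the exact on-chip implementation.

```python
import numpy as np

def ambient_light_factor(q_gray, dc0_s1, dc2_s1, dc0_s2, dc2_s2, dc0_no, dc2_no):
    """Ambient light correction factor K_AL, steps (5)-(7) / Equations (11)-(13)."""
    q0_1, q2_1 = dc0_s1 - dc0_no, dc2_s1 - dc2_no      # Equation (11)
    q0_2, q2_2 = dc0_s2 - dc0_no, dc2_s2 - dc2_no
    k = ((q2_2 - q2_1) + (q0_2 - q0_1)) / 2.0          # Equation (12)
    return q_gray / k                                  # Equation (13)

def corrected_distance(dc0, dc1, dc2, dc3, i_corr, k_al, f_mod=24e6):
    """Raw distance with ambient light compensation, Equations (14) and (16).
    f_mod is an assumed modulation frequency; i_corr is the calibrated
    grayscale value of Equation (10)."""
    c = 299_792_458.0
    dc0_corr = dc0 - i_corr / k_al                     # Equation (14)
    dc1_corr = dc1 - i_corr / k_al
    phase = np.mod(np.arctan2(dc3 - dc1_corr, dc2 - dc0_corr), 2 * np.pi)
    return (c / 2.0) * phase / (2 * np.pi * f_mod)     # Equation (16)
```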

KAL is used to compensate the ambient light error during the demodulation process as Equation (14):

DC0/1_{corr}(x, y) = DC0/1(x, y) - \frac{I_{corr}(x, y)}{K_{AL}}   (14)

where DC0/1corr(x, y) represents the corrected value of DC0/1(x, y).

As introduced in Section 2.1, distance measurements are taken by acquiring the four DCs and calculated pixel-by-pixel during runtime as Equation (15):

D_{raw}(x, y) = \frac{c}{2} \times \frac{1}{2\pi f} \times \mathrm{atan2}\left(DC3(x, y) - DC1(x, y),\ DC2(x, y) - DC0(x, y)\right)   (15)

The equation is revised after ambient light compensation as Equation (16):

D_{raw}(x, y) = \frac{c}{2} \times \frac{1}{2\pi f} \times \mathrm{atan2}\left(DC3(x, y) - DC1_{corr}(x, y),\ DC2(x, y) - DC0_{corr}(x, y)\right)   (16)

where DC1corr(x, y) and DC0corr(x, y) are the sample values corrected by KAL.

Through SA-GCDCM, the disturbance of ambient light can be effectively eliminated. Compared with the traditional joint method using a common RGB camera with a ToF camera, this method has no requirement of coordinate transformation and feature matching, leading to better data consistency and self-adaptability.

3.3.2. Demodulation Error Correction

In general, the sinusoidal wave is adopted as the modulated continuous wave signal, which can be represented as E(t) = Acos(ωt). Similarly, the received signal is deemed as a sinusoidal wave in demodulation as well. However, because of the limitations of the generator bandwidth, the actual received signal is similar to a rectangular wave [42], as shown in Figure 7. Therefore, the rectangular wave was used for demodulation analysis in this paper.


Figure 7. The actual sinusoidal modulation signal and its Fourier transform. (a) Actual sinusoidal modulation signal. (b) Fast Fourier transform of the sinusoidal modulation signal.

Different from Section 2.1, the sampling process is modified as Equation (17):

Q_1^{DC0} = A t_{ToF}, \; 0 < t_{ToF} < \tfrac{\pi}{w} \qquad Q_2^{DC0} = A\left(\tfrac{\pi}{w} - t_{ToF}\right), \; 0 < t_{ToF} < \tfrac{\pi}{w}
Q_1^{DC1} = A\left(\tfrac{\pi}{2w} - t_{ToF}\right), \; 0 < t_{ToF} < \tfrac{\pi}{2w} \qquad Q_2^{DC1} = A\left(t_{ToF} + \tfrac{\pi}{2w}\right), \; 0 < t_{ToF} < \tfrac{\pi}{2w}
Q_1^{DC1} = A\left(t_{ToF} - \tfrac{\pi}{2w}\right), \; \tfrac{\pi}{2w} < t_{ToF} < \tfrac{\pi}{w} \qquad Q_2^{DC1} = A\left(\tfrac{3\pi}{2w} - t_{ToF}\right), \; \tfrac{\pi}{2w} < t_{ToF} < \tfrac{\pi}{w}
Q_1^{DC2} = A\left(\tfrac{\pi}{w} - t_{ToF}\right), \; 0 < t_{ToF} < \tfrac{\pi}{w} \qquad Q_2^{DC2} = A t_{ToF}, \; 0 < t_{ToF} < \tfrac{\pi}{w}
Q_1^{DC3} = A\left(t_{ToF} + \tfrac{\pi}{2w}\right), \; 0 < t_{ToF} < \tfrac{\pi}{2w} \qquad Q_2^{DC3} = A\left(\tfrac{\pi}{2w} - t_{ToF}\right), \; 0 < t_{ToF} < \tfrac{\pi}{2w}
Q_1^{DC3} = A\left(\tfrac{3\pi}{2w} - t_{ToF}\right), \; \tfrac{\pi}{2w} < t_{ToF} < \tfrac{\pi}{w} \qquad Q_2^{DC3} = A\left(t_{ToF} - \tfrac{\pi}{2w}\right), \; \tfrac{\pi}{2w} < t_{ToF} < \tfrac{\pi}{w}   (17)

Thus, an extra error, called the demodulation error, is generated by this revised sampling process. The method to correct the demodulation error has been discussed extensively in [42] and will not be repeated in this paper.
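Before that wave-shape correction is applied, the runtime distance is obtained from the (ambient-corrected) samples as in Equations (14)–(16). The following Python sketch mirrors those equations; the array names, the modulation frequency argument and the wrapping of the phase into [0, 2π) are assumptions of the sketch, not details taken from the authors' implementation.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def ambient_corrected(dc, i_corr, k_al):
    """Equation (14): remove the ambient-light contribution from a DC0 or DC1 sample."""
    return dc - i_corr / k_al

def raw_distance(dc0, dc1, dc2, dc3, f_mod, i_corr=None, k_al=None):
    """Equations (15)/(16): four-phase distance, optionally with ambient-light correction."""
    if i_corr is not None and k_al is not None:
        dc0 = ambient_corrected(dc0, i_corr, k_al)   # DC0corr in Equation (16)
        dc1 = ambient_corrected(dc1, i_corr, k_al)   # DC1corr in Equation (16)
    phase = np.arctan2(dc3 - dc1, dc2 - dc0)
    phase = np.mod(phase, 2.0 * np.pi)               # keep phase, and thus distance, non-negative
    return (C / 2.0) * phase / (2.0 * np.pi * f_mod)
```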

3.3.3. DRNU Error Compensation

The calibration was based on the electrical analog delay method. Instead of changing the real distance, a delay-locked loop (DLL) was used to simulate the distance. The simulated distance is composed of two parts, as shown in Equation (18). The first part is the simulated distance of the DLLs; the system contains several DLLs and each DLL represents a specific simulated distance, e.g., 0.3 m. The second part represents the real distance between the calibration plate and the PMD Lidar. By combining the two parts, multiple distances can be simulated without moving the calibration plate. However, the DLL is susceptible to temperature changes and circuit delay, leading to a deviation between the simulated distance and the set distance. Thus, a DRNU error compensation was conducted as Equation (19):

Dsim(x, y) = n × dDLL + ozero    (18)

DRNU(x, y) = Dcal(x, y) − Dsim(x, y)    (19)

where dDLL represents the simulated distance of a single DLL, n is the number of DLLs, ozero is the distance between the PMD solid-state array Lidar and the reflecting plate, Dsim(x, y) represents the overall simulated distance, Dcal(x, y) represents the corrected distance after compensation and DRNU(x, y) is the compensation value. Since the DRNU error is related to distance and the limited number of compensation values cannot completely cover the whole distance range, a linear interpolation was carried out to obtain a continuous offset curve. The interpolation method is quite basic and will not be illustrated here.
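Although the interpolation itself is standard, a compact sketch of the offset lookup may still help. The version below assumes one averaged raw-distance frame per DLL stage, treats dDLL and ozero as known constants (illustrative values, not device specifications), and takes the offset as the difference between the distance measured at each stage and the simulated distance.

```python
import numpy as np

def simulated_distances(n_stages, d_dll=0.3, o_zero=0.5):
    """Equation (18): distances simulated by 0..n_stages-1 DLL stages plus the fixed plate offset."""
    return np.arange(n_stages) * d_dll + o_zero

def drnu_offsets(measured_per_stage, n_stages, d_dll=0.3, o_zero=0.5):
    """Equation (19): per-stage DRNU offsets, averaged here over the pixel array."""
    d_sim = simulated_distances(n_stages, d_dll, o_zero)
    d_meas = np.array([np.mean(frame) for frame in measured_per_stage])
    return d_sim, d_meas - d_sim

def drnu_offset_at(distance, d_sim, offsets):
    """Linear interpolation of the discrete offsets into a continuous curve."""
    return np.interp(distance, d_sim, offsets)
```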

3.3.4. Temperature Compensation

Several studies have reported the influence of temperature on the ToF camera [7,15,35]. In this paper, the main components related to temperature error in the PMD Lidar were further classified into three parts, namely the ToF sensor, the illumination driver and the external circuit, as discussed in Section 2.2.2. It was found from experiments that the error showed a linear relation with temperature: the higher the temperature, the higher the error. The error arising from temperature drift was therefore compensated with a joint equation, as shown in Equation (20):

Dfinal(x, y) = Dcal(x, y) − (Tact − Tcal) × (TCpix + TClaser + n × TCDLL)    (20)

where Dfinal(x, y) is the corrected distance after temperature compensation, Tact represents the acting temperature, Tcal is the temperature during the calibration, TCpix is the temperature coefficient of the pixel, TClaser is the temperature coefficient of the illumination unit and TCDLL represents the temperature coefficient of the DLL stage. For the device in this paper, TCpix was 11.3 mm/K, TClaser was 2.7 mm/K and TCDLL was 0.7 mm/K. It is worth mentioning that these parameters were obtained for this specific device, which means they are not applicable to every device; this is due to the diversity of the circuit board, chip and other components derived from fabrication.

After applying the temperature compensation, the multiscene adaptive multifactor calibration model was established. The feasibility of the model was then verified by experiments in Section 4.
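The correction of Equation (20) reduces to a single line. The sketch below uses the device-specific coefficients quoted above (converted from mm/K to m/K) as defaults, with the same caveat as in the text that these values do not transfer to other devices.

```python
def temperature_compensation(d_cal, t_act, t_cal, n_dll=0,
                             tc_pix=11.3e-3, tc_laser=2.7e-3, tc_dll=0.7e-3):
    """Equation (20): remove the linear temperature drift from the calibrated distance (metres).

    t_act is the acting temperature and t_cal the temperature during calibration, in the
    same unit (e.g. degrees Celsius); the coefficient defaults are device specific.
    """
    drift = (t_act - t_cal) * (tc_pix + tc_laser + n_dll * tc_dll)
    return d_cal - drift
```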


4. Experiments and Discussions

4.1. Experimental Settings

The PMD solid-state array Lidar is shown in Figure 8, which is mainly composed of four parts: the emitting unit, the receiving unit, the processing unit and the transmission unit. The emitting unit generates and emits the NIR light with the VCSEL. The receiving unit receives the returned light with a CMOS sensor and converts the optical signal into an electrical signal. The processing unit calculates the distance data by demodulating the phase delay between the emitted and the detected signal. The transmission unit transmits the distance data to the computer.


Figure 8. The PMD solid-state array Lidar.

As illustrated in Figure 9, the experimental settings were established. The grayscale image calibration system, as shown in Figure 9a, was used to calibrate the lens distortion and eliminate the effect of DSNU and PRNU. The depth image calibration system, as shown in Figure 9b, was used to compensate multierror sources in distance measurements.


Figure 9. Experimental settings. (a) The grayscale image calibration system and (b) the depth image calibration system.

The grayscale image calibration system mainly includes the checkerboard, the PMD solid-state array Lidar, the clamping device and the rail. Based on the system, the grayscale image calibration was conducted as follows.

(1). Clamp the PMD Lidar on the clamping device.
(2). Adjust the clamping device to a proper location where the checkerboard is suitable in size and position in the field of view of the PMD Lidar.
(3). Calibrate the grayscale image based on integration time simulating.
(4). Obtain several grayscale images with the checkerboard in different directions to calibrate the lens distortion (see the sketch after this list).
(5). Utilize the pixel adaptive interpolation strategy to fill the holes.
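The lens-distortion part of the procedure (steps (4) and (5)) can be reproduced with a standard checkerboard pipeline. The sketch below relies on OpenCV's pinhole-plus-distortion model instead of the paper's pixel adaptive interpolation strategy, and the pattern size and square size are assumptions.

```python
import cv2
import numpy as np

def calibrate_lens(gray_images, pattern_size=(9, 6), square_size=0.025):
    """Estimate intrinsics (fx, fy, cx, cy) and distortion (k1, k2, p1, p2, k3) from checkerboard views."""
    # 3D corner coordinates of the checkerboard in its own plane (z = 0).
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

    obj_points, img_points = [], []
    for img in gray_images:
        found, corners = cv2.findChessboardCorners(img, pattern_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    _, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        obj_points, img_points, gray_images[0].shape[::-1], None, None)
    return camera_matrix, dist_coeffs

def undistort(img, camera_matrix, dist_coeffs):
    """Remap a grayscale (or per-pixel depth) image with the estimated lens model."""
    return cv2.undistort(img, camera_matrix, dist_coeffs)
```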

The depth image calibration system mainly includes the reflecting plate, the PMD solid-state array Lidar, the cylindrical tube, the ambient light source, the clamping device and the rail. The cylindrical tube was used to protect the ToF sensor from the stray light. The ambient light source was used to provide the assistant lighting. Based on the system, the depth image calibration was conducted as follows.
(1). Install the cylinder on the PMD Lidar and clamp the PMD Lidar on the clamping device.
(2). Adjust the clamping device to a proper location where the quality of the light spot projected on the reflecting plate is optimized.
(3). Change the distance with the electrical analog delay method to perform the depth calibration; multiple error compensations are included in this step (a data-collection sketch follows after this list).
(4). Change the reflecting board to adapt the method to objects of different reflectivities.
(5). Conduct the interpolation on the data to obtain the continuous offset curves.
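Step (3) amounts to sweeping the DLL stages and recording raw distances at each simulated distance. The loop below only illustrates that flow; the lidar handle and its set_dll_stages() and capture_distance() methods are hypothetical stand-ins for whatever interface the actual device SDK provides.

```python
import numpy as np

def collect_depth_calibration(lidar, n_stages, frames_per_stage=20):
    """Sweep the electrical analog delay and record an averaged raw-distance frame per stage."""
    raw_per_stage = []
    for n in range(n_stages):
        lidar.set_dll_stages(n)                          # hypothetical call: select n DLL stages
        frames = [lidar.capture_distance() for _ in range(frames_per_stage)]
        raw_per_stage.append(np.mean(frames, axis=0))    # average frames to suppress temporal noise
    return raw_per_stage
```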

4.2. Results with Grayscale Image Calibration

4.2.1. Lens Distortion Correction

Several grayscale images with the checkerboard in different directions were obtained to calibrate the lens distortion. The pixel adaptive interpolation strategy was used to fill the holes. The results are shown in Figure 10.


Figure 10. Results of lens distortion correction. (a) Before calibration and (b) after calibration.

The internal parameters and the distortion coefficients are shown in Table 2.

Table 2. Lens parameters.

Internal parameters:       fx = 208.915, fy = 209.647, cx = 159.404, cy = 127.822
Distortion coefficients:   k1 = −0.37917, k2 = 0.17410, p1 = 0.00021, p2 = 0.00124

4.2.2. DSNU and PRNU

The influences of DSNU and PRNU were eliminated based on the integration time simulating method introduced in Section 3.2. Firstly, the raw grayscale images were acquired under multiple ambient light levels by setting the integration time to several values. Then the spatial variances were calculated from the images to estimate the dark signal non-uniformity (DSNU) and photo response non-uniformity (PRNU). Finally, the influence of DSNU and PRNU was eliminated by a correction algorithm.

To evaluate the effectiveness of the method, several experiments were conducted in different scenes. A checkerboard and a flat white board were used as the test scenes to conduct a qualitative analysis and a quantitative analysis. Then grayscale images were captured in two real scenes to verify the feasibility of the method. The results are shown in Figure 11. The images in the left column were captured before calibration, while the images in the right column were captured after calibration.

Figure 11a,b were captured with the checkerboard. Compared with the image before calibration, two types of non-uniformity were compensated. In the raw image, the central area was brighter while the surroundings were darker because of the uneven exposure; meanwhile, there existed distinct vertical light and dark stripes. In the image after calibration, these two phenomena were suppressed obviously.

To further prove the effectiveness of the method, a quantitative analysis was then conducted. A flat white board was suitable for this analysis because of its good flatness and smoothness. The images are shown in Figure 11c,d. The visual improvement was consistent with the results of the checkerboard: two types of non-uniformities were compensated obviously. To quantify the improvement, the mean value, root mean square error (RMSE) and peak signal-to-noise ratio (PSNR) [51] were chosen to evaluate the quality of the images, and the results are shown in Table 3. The mean value shows little difference before and after the calibration, which means the method did not change the overall sampling of the grayscale signal. However, the RMSE was reduced significantly after calibration, which indicates the uniformity of the grayscale signal was improved distinctly. In addition, the PSNR improved after calibration, which means the noise derived from PRNU and DSNU was reduced.

Experiments were then conducted in two real scenes to verify the applicability of the method in reality, as shown in Figure 11e–h. It can be seen that the vertical stripes were effectively suppressed after calibration. Meanwhile, the uneven exposure, which leads to uneven brightness of the image, was obviously suppressed. Similarly, a quantitative analysis was conducted and the results are shown in Table 3. The mean values showed no distinct differences before and after the calibration, while the reduction of the RMSEs was obvious, which means the non-uniformity of the grayscale images was suppressed. The PSNRs were higher after calibration, which means the noise was effectively reduced. The results in real scenes were in accordance with the results in the test scenes, which means the calibration method is feasible in reality.

Table 3. Quantitative analysis of the grayscale calibration method.

              Flat White Board        Real Scene 1          Real Scene 2
              Before      After       Before     After      Before     After
Mean value    913.98      915.73      559.85     571.83     278.91     305.03
RMSE          168.22      12.89       291.30     221.23     256.65     192.86
PSNR          43.97       55.13       41.58      42.78      42.13      43.37
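For completeness, the three indicators in Table 3 can be computed as below; the reference image and the peak grey value are assumptions of this sketch, since the exact definitions used in the paper follow [51].

```python
import numpy as np

def grayscale_quality(img, reference, peak):
    """Mean value, RMSE and PSNR of a grayscale image against a reference (e.g. an ideally flat target)."""
    img = np.asarray(img, dtype=float)
    ref = np.asarray(reference, dtype=float)
    rmse = np.sqrt(np.mean((img - ref) ** 2))
    psnr = 20.0 * np.log10(peak / rmse) if rmse > 0 else np.inf
    return {"mean": img.mean(), "rmse": rmse, "psnr": psnr}
```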


Figure 11. Results of grayscale image calibration. Images (a), (c), (e) and (g) are captured before calibration, while images (b), (d), (f) and (h) are captured after calibration.



4.3. Result with Depth Image Calibration

Depth image calibration was carried out after grayscale image calibration. The results are shown in Figure 12. In the left image, which was obtained before calibration, there existed many incorrect or even invalid data points, and bright and dark stripes can be observed distinctly. The non-uniformity of the whole image indicates the depth data was untrustworthy before calibration. After utilizing ambient light compensation, demodulation error correction, DRNU error compensation and temperature compensation, the quality of the depth image improved significantly: the number of noise points reduced obviously and the confidence of the depth information was greatly improved.
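At the distance level, the four compensations can be chained as sketched below. The order of application, the sign convention of the DRNU offset and the optional callable standing in for the demodulation correction of [42] are assumptions of this sketch; the raw distance is assumed to come from the ambient-light-corrected demodulation of Equation (16).

```python
import numpy as np

def calibrated_distance(d_raw, d_sim_lut, drnu_lut, t_act, t_cal, n_dll=0,
                        tc_pix=11.3e-3, tc_laser=2.7e-3, tc_dll=0.7e-3,
                        demod_correction=None):
    """Chain demodulation, DRNU and temperature corrections on an ambient-corrected raw distance."""
    d = np.asarray(d_raw, dtype=float)
    if demod_correction is not None:
        d = demod_correction(d)                      # wave-shape (demodulation) correction, see [42]
    d = d - np.interp(d, d_sim_lut, drnu_lut)        # DRNU offset, Equation (19) after interpolation
    drift = (t_act - t_cal) * (tc_pix + tc_laser + n_dll * tc_dll)
    return d - drift                                 # temperature drift, Equation (20)
```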

Figure 12. Results of depth image calibration. (a) Before calibration and (b) after calibration.

4.4. Ranging Accuracy Verification under Real Environment

To verify the ranging accuracy and the adaptability of the calibration method, several tests under the real environment were conducted. The test system is shown in Figure 13. Firstly, the test system was placed in the dark environment (the ambient light was about 0 Lux), the indoor environment (about 500 Lux) and the outdoor environment (about 1200 Lux), respectively. Then the reflecting plate was set to 80% reflectivity, 50% reflectivity and 20% reflectivity, respectively.

Figure 13. The test system.

In each test, the distance between the PMD solid-state array Lidar and the reflecting plate changed from 0.5 to 5 m in a gradient. The mean value in the ROI (1000 pixels in the central region) was recorded as the measured distance. Then the distance error was calculated. The test results are illustrated in Figure 14.
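The per-distance ROI statistics and the summary indicators used in the following comparison (maximal error, average error and RMSE) can be reproduced as follows; the ROI slices and the use of absolute errors for the average are assumptions of the sketch.

```python
import numpy as np

def ranging_error_stats(depth_frames, set_distances, roi=(slice(105, 130), slice(140, 180))):
    """ROI-mean measured distances and error statistics over a set of test distances (metres)."""
    measured = np.array([frame[roi].mean() for frame in depth_frames])
    errors = measured - np.asarray(set_distances, dtype=float)
    return {
        "measured": measured,
        "max_error": float(np.max(np.abs(errors))),
        "avg_error": float(np.mean(np.abs(errors))),
        "rmse": float(np.sqrt(np.mean(errors ** 2))),
    }
```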




Figure 14. The test results. (a) Under different ambient conditions and (b) with targets of different reflectivities.

Compared with the method in [42], the method proposed in this paper effectively reduced the error in the distance range of 0.5–5 m. Meanwhile, the adaptability under different environments improved considerably. Table 4 gives the detailed performance comparison results of the two methods.

Table 4. Detailed performance comparison results of the two methods.

Comparison Items          Maximal Error (mm)    Average Error (mm)    RMSE (mm)
The proposed method       16.4                  8.13                  4.47
Reference [42] method     20.5                  9.68                  5.56

The maximal error, average error and RMSE were chosen to compare the detailed performance. From Table 4, the maximum error was reduced distinctly by the proposed method, and the non-uniformity was reduced considerably as well. Although the difference in average error was not as distinct as that of the other two indicators, the proposed method had better adaptability; in other words, the proposed method was more adaptive to multiple scenes and different reflectivities.

The proposed method was compared with several traditional methods as well, and the results are shown in Table 5. It can be clearly seen that the proposed method has better performance in range accuracy compared with the methods in [13,17,26]. Although the mean distance error shows no distinct improvement compared with the method in [19], the proposed method has better performance in calibration time and scene scope. In addition, the experimental results showed that the proposed method had an outstanding performance in adaptability.

Table 5. Detailed performance compared with traditional methods.

Distance error (mm) at set distances of 900, 1100, 1300, 1700, 2100, 2500, 3000, 3500 and 4000 mm:
Lindner et al. [13]: 19.4, 28.2, 21.0, 28.9, 13.5, 17.3, 15.9, 21.8, 26.7
Steiger et al. [17]: NaN, 3 (at 1207), 25, 57, NaN, NaN, NaN, NaN
Schiller et al. [19] (automatic feature detection): 7.45 (mean)
Schiller et al. [19] (some manual feature selection): 7.51 (mean)
Huang et al. [26]: 42, 23, 18, 24, 46, 60, 58, 76, NaN
The proposed method: 3.1, 4.4, 5.5, 7, 7.4, 8.1, 9.8, 9.6, 12

Calibration time and scene scope:
Lindner et al. [13]: about dozens of minutes; about 4 m × 0.6 m × 0.4 m
Steiger et al. [17]: about dozens of minutes; not mentioned
Schiller et al. [19] (both variants): about dozens of minutes; about 3 m × 0.6 m × 0.4 m
Huang et al. [26]: not mentioned; about 1.5 m × 1.5 m × 2 m
The proposed method: 90 s (calculation), 10 min (calculation, scene setup and initialization); about 1.0 m × 1.0 m × 1.5 m

5. Conclusions

To improve the range accuracy of the PMD solid-state array Lidar, this paper presents a self-adaptive grayscale correlation based depth calibration method (SA-GCDCM) based on electrical analog delay. Based on the characteristics of the PMD solid-state array Lidar, the grayscale image was used to estimate the parameters of ambient light compensation in depth calibration. To obtain a uniform and stable grayscale image, an integration time simulating based method was proposed to eliminate the influence of DSNU and PRNU. By combining SA-GCDCM with demodulation error correction, DRNU error compensation and temperature compensation, a comprehensive, multiscene adaptive multifactor calibration model was established. A series of experiments were conducted to verify the ranging accuracy and the adaptability of the method. Compared with the prior work, the maximum error was reduced distinctly and the RMSE was reduced as well, indicating that the proposed method had better accuracy and adaptability. Compared with the traditional methods, the proposed method had better performance in range accuracy, calibration time and scene scope. The proposed method was more adaptive to multiple scenes with targets of different reflectivities, which significantly improved the ranging accuracy and adaptability of the PMD Lidar.

Author Contributions: X.W. and P.S. developed the improved calibration method and procedure for the PMD solid-state array Lidar. X.W. and W.Z. established the calibration system and verification test platform and analyzed the experiment results. All authors have read and agreed to the published version of the manuscript.

Funding: The research was funded by the National Defense Basic Scientific Research Program of China, grant number JCKY2019602B010.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Sansoni, G.; Trebeschi, M.; Docchio, F. State-of-The-Art and Applications of 3D Imaging Sensors in Industry, cultural heritage, medicine, and criminal investigation. Sensors 2009, 9, 568–601. [CrossRef][PubMed] 2. Henry, P.; Krainin, M.; Herbst, E.; Ren, X.; Fox, D. RGB-D mapping: Using Kinect-style depth cameras for dense 3D modeling of indoor environments. Int. J. Robot. Res. 2012, 31, 647–663. [CrossRef] 3. Okada, K.; Inaba, M.; Inoue, H. Integration of real-time binocular stereo vision and whole body information for dynamic walking navigation of humanoid robot. In Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, (MFI 2003), Tokyo, Japan, 1 August 2003; pp. 131–136. 4. Scharstein, D.; Szeliski, R. High-accuracy stereo depth maps using structured light. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, WI, USA, 18–20 June 2003; IEEE: Piscataway, NJ, USA, 2003; Volume 1, pp. 195–202. 5. Sun, M.J.; Edgar, M.P.; Gibson, G.M.; Sun, B.; Radwell, N.; Lamb, R.; Padgett, M.J. Single-pixel three-dimensional imaging with time-based depth resolution. Nat. Commun. 2016, 7, 12010. [CrossRef] [PubMed] 6. Cui, Y.; Schuon, S.; Chan, D.; Thrun, S.; Theobalt, C. 3D shape scanning with a time-of-flight camera. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 1173–1180. 7. He, Y.; Chen, S.Y. Recent advances in 3D Data acquisition and processing by Time-of-Flight camera. IEEE Access. 2019, 7, 12495–12510. [CrossRef] 8. Horaud, R.; Hansard, M.; Evangelidis, G.; Ménier, C. An overview of depth cameras and range scanners based on time-of-flight technologies. Mach. Vis. Appl. 2016, 27, 1005–1020. [CrossRef] 9. Rueda, H.; Fu, C.; Lau, D.L.; Arce, G.R. Single aperture spectral+ToF compressive camera: Toward hyperspectral+depth imagery. IEEE J. Sel. Top. Signal Process. 2017, 11, 992–1003. [CrossRef] 10. Rueda-Chacon, H.; Florez, J.F.; Lau, D.L.; Arce, G.R. Snapshot compressive ToF+Spectral imaging via optimized color-coded apertures. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 42, 2346–2360.

11. Lindner, M.; Schiller, I.; Kolb, A.; Koch, R. Time-of-Flight sensor calibration for accurate range sensing. Comput. Vis. Image Underst. 2010, 114, 1318–1328. [CrossRef] 12. Lindner, M.; Kolb, A. Lateral and depth calibration of PMD-distance sensors. advances in visual computing. In Advance of Visual Computing, Proceedings of the Second International Symposium on Visual Computing, Lake Tahoe, NV, USA, 6–8 November 2006; Springer Science and Business Media: Berlin/Heidelberg, Germany, 2006; Volume 4292, pp. 524–533. 13. Lindner, M.; Kolb, A. Calibration of the intensity-related distance error of the PMD TOF-camera. In Intelligent Robots and Computer Vision XXV: Algorithms, Techniques, and Active Vision, Proceedings of Optics East, Boston, MA, USA, 9–12 September 2007; Casanent, D.P., Hall, E.L., Röning, R., Eds.; SPIE: Bellingham, WA, USA, 2007; Volume 6764, p. 67640W. 14. Lindner, M.; Lambers, M.; Kolb, A. Sub-pixel data fusion and edge-enhanced distance refinement for 2d/3d images. Int. J. Intell. Syst. Technol. Appl. 2008, 5, 344–354. [CrossRef] 15. Kahlmann, T.; Remondino, F.; Ingensand, H. Calibration for increased accuracy of the range imaging camera SwissRanger. In Proceedings of the ISPRS Commission V Symposium “Image Engineering and Vision Metrology”, Dresden, Germany, 25–27 September 2006; Volume 36, pp. 136–141. 16. Kahlmann, T.; Ingensand, H. Calibration and development for increased accuracy of 3D range imaging cameras. J. Appl. Geodesy. 2008, 2, 1–11. [CrossRef] 17. Steiger, O.; Felder, J.; Weiss, S. Calibration of time-of-flight range imaging cameras. In Proceedings of the 2008 15th IEEE International Conference on Image Processing, San Diego, CA, USA, 12–15 October 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1968–1971. 18. Swadzba, A.; Beuter, N.; Schmidt, J.; Sagerer, G. Tracking objects in 6D for reconstructing static scenes. In Proceedings of the 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Anchorage, AK, USA, 23–28 June 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1–7. 19. Schiller, I.; Beder, C.; Koch, R. Calibration of a PMD-camera using a planar calibration pattern together with a multicamera setup. ISPRS Int. J. Geo-Inf. 2008, 21, 297–302. 20. Fuchs, S.; May, S. Calibration and registration for precise surface reconstruction with time of flight cameras. Int. J. Int. Syst. Technol. App. 2008, 5, 274–284. [CrossRef] 21. Fuchs, S.; Hirzinger, G. Extrinsic and depth calibration of ToF-cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, AK, USA, 23–28 June 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1–6. 22. Chiabrando, F.; Chiabrando, R.; Piatti, D.; Rinaudo, F. Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera. Sensors 2009, 9, 10080–10096. [CrossRef][PubMed] 23. Christian, B.; Reinhard, K. Calibration of focal length and 3D pose based on the reflectance and depth image of a planar object. Int. J. Intell. Syst. Technol. Appl. 2008, 5, 285–294. 24. Kuhnert, K.D.; Stommel, M. Fusion of stereo-camera and PMD-camera data for real-time suited precise 3D environment reconstruction. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 4780–4785. 25. Schmidt, M. Analysis, Modeling and Dynamic Optimization of 3d Time-of-Flight Imaging Systems. Ph.D. Thesis, University of Heidelberg, Heidelberg, Germany, 20 July 2011; pp. 1–158. 26. Huang, T.; Qian, K.; Li, Y. 
All Pixels Calibration for ToF Camera. In Proceedings of the IOP Conference Series: Earth and Environmental Science, Ordos, China, 28–29 April 2018; IOP Publishing: Bristol, UK, 2018; Volume 170, p. 022164. 27. Ying, H.; Bin, L.; Yu, Z.; Jin, H.; Jun, Y. Depth errors analysis and correction for Time-of-Flight (ToF) cameras. Sensors 2017, 17, 92. 28. Radmer, J.; Fuste, P.M.; Schmidt, H.; Kruger, J. Incident light related distance error study and calibration of the PMD-range imaging camera. In Proceedings of the 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Anchorage, AK, USA, 23–28 June 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1–6. 29. Frank, M.; Plaue, M.; Rapp, H.; Köthe, U.; Jähne, B.; Hamprecht, F.A. Theoretical and experimental error analysis of continuous-wave time-of-flight range cameras. Opt. Eng. 2009, 48, 013602.

30. Schiller, I.; Bartczak, B.; Kellner, F.; Kollmann, J.; Koch, R. Increasing realism and supporting content planning for dynamic scenes in a mixed reality system incorporating a Time-of-Flight camera. In Proceedings of the IET 5th European Conference on Visual Media Production (CVMP 2008), London, UK, 26–27 November 2008; IET: Stevenage, UK, 2008; Volume 7, p. 11. [CrossRef] 31. Foix, S.; Alenya, G.; Torras, C. Lock-in time-of-flight (ToF) cameras: A survey. IEEE Sens. J. 2011, 11, 1917–1926. [CrossRef] 32. Piatti, D.; Rinaudo, F. SR-4000 and CamCube3.0 time of flight (ToF) cameras: Tests and comparison. Remote Sens. 2012, 4, 1069–1089. [CrossRef] 33. Lee, S. Time-of-flight depth camera accuracy enhancement. Opt. Eng. 2012, 51, 083203. [CrossRef] 34. Lee, C.; Kim, S.Y.; Kwon, Y.M. Depth error compensation for camera fusion system. Opt. Eng. 2013, 52, 073103. [CrossRef] 35. Fürsattel, P.; Placht, S.; Balda, M.; Schaller, C.; Hofmann, H.; Maier, A.; Riess, C. A Comparative error analysis of current Time-of-Flight sensors. IEEE Trans. Comput. Imaging 2017, 2, 27–41. [CrossRef] 36. Karel, W. Integrated range camera calibration using image sequences from hand-held operation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 945–952. 37. Gao, J.; Jia, B.; Zhang, X.; Hu, L. PMD camera calibration based on adaptive bilateral filter. In Proceedings of the 2011 Symposium on Photonics and Optoelectronics (SOPO), Wuhan, China, 16–18 May 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 1–4. 38. Park, Y.; Yun, S.; Won, C.S.; Cho, K.; Um, K.; Sim, S. Calibration between color camera and 3D LIDAR instruments with a polygonal planar board. Sensors 2014, 14, 5333–5353. [CrossRef][PubMed] 39. Jung, J.; Lee, J.-Y.; Jeong, Y.; Kweon, I.S. Time-of-flight sensor calibration for a color and depth camera pair. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1501–1513. [CrossRef] 40. Villena-Martínez, V.; Fuster-Guilló, A.; Azorín-López, J.; Saval-Calvo, M.; Mora-Pascual, J.; Garcia-Rodriguez, J.; Garcia-Garcia, A. A quantitative comparison of calibration methods for RGB-D sensors using different technologies. Sensors 2017, 17, 243. [CrossRef] 41. Zeng, Y.; Yu, H.; Dai, H.; Song, S.; Lin, M.; Sun, B.; Jiang, W.; Meng, M.Q. An improved calibration method for a rotating 2D LiDAR system. Sensors 2018, 18, 497. [CrossRef] 42. Zhai, Y.; Song, P.; Chen, X. A fast calibration method for photonic mixer device solid-state array lidars. Sensors 2019, 19, 822. [CrossRef] 43. Fujioka, I.; Ho, Z.; Gu, X.; Koyama, F. Solid State LiDAR with Sensing Distance of over 40m using a VCSEL Beam Scanner. In Proceedings of the 2020 Conference on Lasers and Electro-Optics, San Jose, CA, USA, 10–15 May 2020; OSA Publishing: Washington, DC, USA, 2020; p. SM2M.4. 44. Prafulla, M.; Marshall, T.D.; Zhu, Z.; Sridhar, C.; Eric, P.F.; Blair, M.K.; John, W.M. VCSEL Array for a Depth Camera. U.S. Patent US20150229912A1, 27 September 2016. 45. Seurin, J.F.; Zhou, D.; Xu, G.; Miglo, A.; Li, D.; Chen, T.; Guo, B.; Ghosh, C. High-efficiency VCSEL arrays for illumination and sensing in consumer applications. In Vertical-Cavity Surface-Emitting Lasers XX, Proceedings of the SPIE OPTO, San Francisco, CA, USA, 4 March 2016; SPIE: Bellingham, WA, USA, 2016; Volume 9766, p. 97660D. 46. Tatum, J. VCSEL proliferation. In Vertical-Cavity Surface-Emitting Lasers XI, Proceedings of the Integrated Optoelectronic Devices 2007, San Jose, CA, USA; SPIE: Bellingham, WA, USA, 2007; Volume 6484, p. 648403. 47. Kurtti, S.; Nissinen, J.; Kostamovaara, J. 
A wide dynamic range CMOS laser radar receiver with a time-domain walk error compensation scheme. IEEE Trans. Circuits Syst. I-Regul. Pap. 2016, 64, 550–561. [CrossRef] 48. Kadambi, A.; Whyte, R.; Bhandari, A.; Streeter, L.; Barsi, C.; Dorrington, A.; Raskar, R. Coded time of flight cameras: Sparse deconvolution to address multipath interference and recover time profiles. ACM Trans. Graph. 2013, 32, 167. [CrossRef] 49. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [CrossRef] 50. EMVA. European Machine Vision Association: Downloads. Available online: www.emva.org/standards-technology/emva-1288/emva-standard-1288-downloads/ (accessed on 30 December 2016).

51. Pappas, T.N.; Safranek, R.J.; Chen, J. Perceptual Criteria for Image Quality Evaluation. In Handbook of Image and Video Processing, 2nd ed.; Elsevier: Houston, TX, USA, 2005; pp. 939–959.

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).