
Article

An Automatic 3D Point Cloud Registration Method Based on Biological Vision

Jinbo Liu 1,*, Pengyu Guo 2,* and Xiaoliang Sun 3,*

1 Hypervelocity Aerodynamics Institute, China Aerodynamics Research and Development Center, Mianyang 621000, China
2 National Innovation Institute of Defense Technology, Beijing 100071, China
3 College of Aerospace Science and Engineering, National University of Defense Technology, Changsha 410073, China
* Correspondence: [email protected] (J.L.); [email protected] (P.G.); [email protected] (X.S.)

Abstract: When measuring surface deformation, because the overlap of point clouds before and after deformation is small and the accuracy of the initial value of point cloud registration cannot be guaranteed, traditional point cloud registration methods cannot be applied. In order to solve this problem, a complete solution is proposed: first, at least three cones are fixed to the target; then, initial values of the transformation matrix are calculated from the cone vertices. On this basis, the point cloud registration can be performed accurately through the iterative closest point (ICP) algorithm using the neighboring point clouds of the cone vertices. To improve the automation of this solution, an accurate and automatic point cloud registration method based on biological vision is proposed. First, the three-dimensional (3D) coordinates of cone vertices are obtained through multi-view observation, feature detection, data fusion, and shape fitting. In shape fitting, a closed-form solution of cone vertices is derived on the basis of the quadratic form. Second, a random strategy is designed to calculate the initial values of the transformation matrix between two point clouds. Then, combined with ICP, point cloud registration is realized automatically and precisely. The simulation results showed that, when the intensity of Gaussian noise ranged from 0 to 1 mr (where mr denotes the average mesh resolution of the models), the rotation and translation errors of point cloud registration were less than 0.1° and 1 mr, respectively. Lastly, a camera-projector system to dynamically measure the surface deformation during ablation tests in an arc-heated wind tunnel was developed, and the experimental results showed that the measuring precision for surface deformation exceeded 0.05 mm when surface deformation was smaller than 4 mm.

Keywords: point cloud registration; biological vision; automatic registration; cone vertices; deformation measurement

1. Introduction

Near-space is a connected region between traditional aeronautics and space. Near-space supersonic vehicles have great potential, but if flown for a long time in an aerothermal environment, the surface of such vehicles can deform, which causes functional failure. Thus, the deformation properties of materials in an aerothermal environment urgently need to be explored. An arc-heated wind tunnel is the main device used to simulate an aerothermal environment. There are many methods to reconstruct 3D shape data at different times during ablation tests, but the alignment of the point clouds is difficult, because the overlap is too small.

Point cloud registration involves calculating a rigid transformation matrix, consisting of a rotation matrix and a translation vector, to minimize the alignment error between two point clouds; it has been widely used for simultaneous localization and mapping (SLAM) [1–3], multi-view point cloud registration [4,5], object recognition [6,7], etc. The classical iterative closest point (ICP) algorithm is the most widely used in point cloud


registration [8]. It was proposed by Besl in 1992 and has been used to solve the registration problem of free-form surfaces. The basic idea is to search for the corresponding closest point in a test point cloud for each point in a reference point cloud. According to the set consisting of all the closest points, a transformation matrix between the two point clouds is calculated, resulting in a registration error. If the registration error does not satisfy the stopping criterion, the transformed test point cloud is taken as a new test point cloud, and the above steps are repeated until the stopping criterion is satisfied. In the classical ICP algorithm, the registration error takes two forms, i.e., point-to-point and point-to-plane. In general, the ICP algorithm based on the point-to-plane registration error converges at a faster rate [9]. Using the classical ICP algorithm as a foundation, many variants have been proposed by researchers [10–14]. The key step of these methods involves searching the set of closest points. However, in a real scenario of surface deformation, it is very difficult to accurately determine the closest points without any prior information. Moreover, these methods are very sensitive to the initial values of the transformation matrix. If the quality of the initial values is not good enough, these methods easily suffer from local minima.

Another way to solve the point cloud registration problem is by using probabilistic methods. The core idea is to transform the registration problem of point clouds into an optimization problem of probability model parameters [15–20]. As compared with ICP and its variants, these methods perform more robustly with respect to noise and outliers, but their computational complexity is high. Similar to ICP methods, these methods can only be used to align point clouds with large overlaps, and an initial value of the transformation matrix with high quality is also required. In the application of surface deformation measurement, the overlap between point clouds is small and the quality of the initial values cannot be guaranteed. Thus, if the above methods are directly used to align the point clouds, the registration error is large.

In order to solve the problem of surface deformation measurement, a complete solution is proposed. Figure 1 shows the flowchart. Firstly, the target is fixed to a mounting bracket with at least three cones. Secondly, a cone vertex detection algorithm is derived to extract feature points. Thirdly, the point cloud before deformation is accurately aligned to the point cloud after deformation through an automatic registration algorithm, using the neighboring point clouds of the cone vertices, which are not deformed during ablation tests. Then, the surface deformation can be obtained. This paper is organized as follows: In Section 2, we introduce the detection principles of cone vertices in detail; in Section 3, we derive an automatic registration algorithm for point clouds; in Section 4, we introduce the research methodology, including cone vertex detection, automatic registration, and surface deformation measurement; in Section 5, we present the research results, with the accuracy and robustness of the cone vertex detection algorithm and automatic registration algorithm in Sections 5.1 and 5.2, respectively, and the results of surface deformation measurement in Section 5.3; in Section 6, we discuss the research results; and, in Section 7, we conclude with the contributions of this study and next steps in research.
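To make the classical ICP loop described above concrete, the following minimal point-to-point sketch is given (our own illustration in Python with NumPy and SciPy; function and variable names are ours, not from the cited papers). It shows the closest-point search, the SVD-based transform estimate, and the stopping criterion:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(ref, test, max_iter=50, tol=1e-8):
    """Minimal point-to-point ICP: aligns `test` (Nx3) to `ref` (Mx3).
    Returns (R, t) such that R @ test[i] + t approximates its match in ref."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(ref)
    prev_err = np.inf
    cur = test.copy()
    for _ in range(max_iter):
        # 1. For every test point, find the closest reference point.
        dist, idx = tree.query(cur)
        matched = ref[idx]
        # 2. Solve the rigid transform between matched sets (Kabsch/SVD).
        mu_c, mu_m = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_c).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_m - R_step @ mu_c
        # 3. Apply the incremental transform and accumulate it.
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
        # 4. Stop when the mean squared closest-point error plateaus.
        err = np.mean(dist ** 2)
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R, t
```

As the text notes, a loop of this kind only converges to the correct alignment when the overlap is large and the initial pose is good, which is exactly what the proposed cone-based initialization supplies.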

Figure 1. A flowchart of surface deformation measurement using the proposed method.

2. Cone Vertex Detection

As shown in Figure 2, the perception of geometric features in a 3D scene for human beings relies on multi-view observation and data fusion, which confer the advantages of high detection accuracy and robustness to noise. The detection process is executed in several steps. Firstly, a target is independently observed from different viewpoints. Its 3D point clouds are projected onto the retina, and corresponding 2D images are generated. Secondly, the features of interest in each 2D image are extracted. All 2D feature points are reprojected onto the surface of the target, and the corresponding 3D feature points are obtained. Thirdly, all 3D feature points observed from different viewpoints are fused, and the fake 3D feature points (e.g., fake vertices) are deleted according to the analysis made by the brain. Lastly, the target's corresponding geometric shape is fitted using the neighboring point cloud of each 3D feature point on the surface of the target in order to improve the precision of the detected 3D feature points.

Figure 2. The principles of cone vertex detection based on biological vision.

2.1. Multi-View Observation

Since the measured space is three-dimensional, in order to carry out the all-round observation of a target, three base planes need to be selected as observation planes, for example, the XOY, YOZ, and ZOX planes. In each observation plane, the target can be observed from different viewpoints. Without loss of generality, the XOY observation plane can be taken as an example, as shown in Figure 3a.



Figure 3. Multi-view observation. (a) Observations from different viewpoints; (b) image binarization under constraints; (c) binary image.


At viewpoint I, the observed 3D point is notated as $P_i^1 = \left(x_i^1, y_i^1, z_i^1\right)$ $(i = 1, 2, \ldots, n-1, n)$, where $n$ is the number of points. Its corresponding 2D projection point on the YOZ plane is notated as $\tilde{p}_i^1 = \left(\tilde{x}_i^1, \tilde{y}_i^1\right)$:

$$\begin{cases} \tilde{x}_i^1 = y_i^1 \\ \tilde{y}_i^1 = z_i^1 \end{cases} \quad (1)$$

The center point of the 2D point cloud is notated as $\tilde{p}_c^1 = \left(\tilde{x}_c^1, \tilde{y}_c^1\right)$.

$$\tilde{p}_c^1 = \frac{\sum_{i=1}^{n} \tilde{p}_i^1}{n}. \quad (2)$$

The width $w_1$ and height $h_1$ of its corresponding minimum enclosing rectangle (MER) are calculated as follows:

$$\begin{cases} w_1 = \max\limits_{i=1,2,\ldots,n-1,n} \tilde{x}_i^1 - \min\limits_{i=1,2,\ldots,n-1,n} \tilde{x}_i^1 \\ h_1 = \max\limits_{i=1,2,\ldots,n-1,n} \tilde{y}_i^1 - \min\limits_{i=1,2,\ldots,n-1,n} \tilde{y}_i^1 \end{cases}. \quad (3)$$

In the YOZ plane, the imaging process of the retina can be simulated to generate a binary image $I^1$ with width $W_1$ and height $H_1$ as:

$$\begin{cases} W_1 = kw_1 \\ H_1 = kh_1 \end{cases}, \quad k \geq 1, \quad (4)$$

where $k$ is a zoom coefficient.

Then, the pixel coordinates of the center point $I_c^1 = \left(x_c^1, y_c^1\right)$ of the image can be calculated as follows:

$$\begin{cases} x_c^1 = (W_1 - 1)/2 \\ y_c^1 = (H_1 - 1)/2 \end{cases}. \quad (5)$$

Then, the 2D point cloud $\tilde{p}_i^1$ can be shifted to the image pixel coordinate system as:

$$p_i^1 = \tilde{p}_i^1 - \tilde{p}_c^1 + I_c^1, \quad (6)$$
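As a concrete illustration of Equations (1)–(6), the following Python sketch (our own naming, not from the paper; `k` plays the role of the zoom coefficient in Equation (4)) projects a point cloud onto the YOZ plane and shifts it into the pixel frame of the simulated image:

```python
import numpy as np

def project_to_image_coords(P, k=4.0):
    """Sketch of Equations (1)-(6): project a 3D cloud P (Nx3) observed
    along the X-axis onto the YOZ plane and shift the result into the
    pixel coordinates of a simulated retina image."""
    # Equation (1): viewpoint I looks along X, so keep (y, z) as 2D coords.
    p2d = P[:, 1:3].copy()
    # Equation (2): center of the 2D point cloud.
    pc = p2d.mean(axis=0)
    # Equation (3): width/height of the minimum enclosing rectangle.
    w, h = p2d.max(axis=0) - p2d.min(axis=0)
    # Equation (4): image size scaled by the zoom coefficient k >= 1.
    W, H = k * w, k * h
    # Equation (5): image center pixel.
    Ic = np.array([(W - 1.0) / 2.0, (H - 1.0) / 2.0])
    # Equation (6): shift the 2D cloud into the image pixel frame.
    pix = p2d - pc + Ic
    return pix, (int(round(W)), int(round(H)))
```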

where $p_i^1$ is the corresponding point of $\tilde{p}_i^1$ in image $I^1$. If the grayscale of $p_i^1$ is directly set to 255 and that of the other pixels is set to 0, the target area in the binary image becomes only a set of discrete pixels rather than a connected domain, which cannot be used to detect corners.

As shown in Figure 3b, $A$, $B$, and $C$ are three vertices of an arbitrary triangle patch $\widetilde{T}_j$ $(j = 1, 2, \ldots, t-1, t)$ on the target surface, and its corresponding triangle in the image is notated as $T_j$, where $t$ is the number of triangle patches. Both $\widetilde{T}_j$ and $T_j$ are connected domains. The coordinates of $A$, $B$, and $C$ are notated as $P_A^1$, $P_B^1$, and $P_C^1$, respectively. The coordinates of $A$, $B$, and $C$ in the image are notated as $p_A^1$, $p_B^1$, and $p_C^1$, respectively. $Q_k$ is an arbitrary pixel in image $I^1$, and its coordinates are notated as $q_k^1$ $(k = 1, 2, \ldots, m-1, m)$, where $m$ is the number of pixels in image $I^1$. Then, the following vectors can be obtained:

$$\begin{cases} \overrightarrow{Q_k A} = p_A^1 - q_k^1 \\ \overrightarrow{Q_k B} = p_B^1 - q_k^1 \\ \overrightarrow{Q_k C} = p_C^1 - q_k^1 \end{cases}. \quad (7)$$

The sum of the angles between these vectors is notated as $\beta_j$:

$$\beta_j = \angle AQ_kB + \angle BQ_kC + \angle CQ_kA = \arccos\frac{\overrightarrow{Q_k A} \cdot \overrightarrow{Q_k B}}{\left|\overrightarrow{Q_k A}\right|\left|\overrightarrow{Q_k B}\right|} + \arccos\frac{\overrightarrow{Q_k B} \cdot \overrightarrow{Q_k C}}{\left|\overrightarrow{Q_k B}\right|\left|\overrightarrow{Q_k C}\right|} + \arccos\frac{\overrightarrow{Q_k C} \cdot \overrightarrow{Q_k A}}{\left|\overrightarrow{Q_k C}\right|\left|\overrightarrow{Q_k A}\right|} \quad (8)$$

If $Q_k$ is inside $T_j$ or lying exactly on the border of $T_j$, then $\beta_j$ must be equal to $360°$. On the basis of this, the image binarization under geometric constraints can be executed according to the following equation:

$$I^1(Q_k) = \begin{cases} 255, & \text{if } \min\limits_{j=1,2,\ldots,t-1,t} \left|\beta_j - 360°\right| = 0 \\ 0, & \text{if } \min\limits_{j=1,2,\ldots,t-1,t} \left|\beta_j - 360°\right| \neq 0 \end{cases}. \quad (9)$$
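A direct transcription of the angle-sum test of Equations (7)–(9) might look like the following sketch (illustrative Python, not from the paper); a small tolerance replaces the exact comparison with 360°:

```python
import numpy as np

def inside_triangle(q, pa, pb, pc, eps=1e-6):
    """Sketch of Equations (7)-(9): pixel q lies inside (or on the border
    of) triangle (pa, pb, pc) iff the angles it subtends sum to 2*pi."""
    verts = [pa - q, pb - q, pc - q]               # Equation (7)
    norms = [np.linalg.norm(v) for v in verts]
    if min(norms) < eps:                           # q sits on a vertex
        return True
    def angle(u, v, nu, nv):
        c = np.clip(np.dot(u, v) / (nu * nv), -1.0, 1.0)
        return np.arccos(c)
    beta = (angle(verts[0], verts[1], norms[0], norms[1])
            + angle(verts[1], verts[2], norms[1], norms[2])
            + angle(verts[2], verts[0], norms[2], norms[0]))  # Equation (8)
    return abs(beta - 2.0 * np.pi) < eps           # Equation (9)
```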

The function $I^1(Q_k) = g$ means that the grayscale of the pixel $Q_k$ in image $I^1$ is set to $g$. Figure 3c shows the result of image binarization under geometric constraints, at which point the first observation has been completed.

As shown in Figure 3a, the point cloud $P_i^1 = \left(x_i^1, y_i^1, z_i^1\right)$ can be rotated by $\alpha$ degrees around the Z-axis; then, the point cloud $P_i^2 = \left(x_i^2, y_i^2, z_i^2\right)$ $(i = 1, 2, \ldots, n-1, n)$ observed at position II can be obtained according to Equation (10) as:

$$P_i^2 = R_Z(\alpha)P_i^1. \quad (10)$$

From Equations (1)–(9), the simulated image $I^2$ at position II can be generated. In the same way, all simulated images $I^u$ $(u = 1, 2, \ldots, s-1, s)$ can be obtained, where $s$ is an integer obtained as:

$$s = \frac{360}{\alpha}. \quad (11)$$

When the XOY, YOZ, and ZOX planes are taken as the observation planes, $3s$ simulated binary images $I^u$ $(u = 1, 2, \ldots, 3s-1, 3s)$ can be obtained in total.
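Equations (10) and (11) amount to a simple loop over rotations about the Z-axis; a sketch (our own helper names, assuming α divides 360 evenly, as the text implies) is:

```python
import numpy as np

def rotation_z(alpha_deg):
    """Rotation matrix R_Z(alpha) used in Equation (10)."""
    a = np.deg2rad(alpha_deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def views_about_z(P, alpha_deg=30.0):
    """Sketch of Equations (10)-(11): rotate the cloud P (Nx3) about the
    Z-axis s = 360/alpha times, yielding one cloud per viewpoint."""
    s = int(round(360.0 / alpha_deg))   # Equation (11)
    RZ = rotation_z(alpha_deg)
    clouds, cur = [], P.copy()
    for _ in range(s):
        clouds.append(cur)
        cur = cur @ RZ.T                # Equation (10), applied per point
    return clouds
```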

2.2. Cone Vertex Recognition

The Harris operator can be used to detect corners in all simulated images $I^u$ $(u = 1, 2, \ldots, 3s-1, 3s)$. $Q_v^u$ $(u = 1, 2, \ldots, 3s-1, 3s)$ $(v = 1, 2, \ldots, r_u-1, r_u)$ represents the detected $v$-th corner in image $I^u$. Its pixel coordinates are notated as $q_v^u$, where $r_u$ is the number of all detected corners in image $I^u$. Then, its corresponding point $\tilde{q}_v^u$ in 2D space can be calculated according to Equation (12) as:

$$\tilde{q}_v^u = q_v^u - I_c^u + \tilde{p}_c^u. \quad (12)$$

The closest point $\tilde{p}_{h_v^u}^u$ to $\tilde{q}_v^u$ can then be searched for in the 2D point cloud:

$$h_v^u = \operatorname*{arg\,min}_{i=1,2,\ldots,n-1,n} dis\left(\tilde{q}_v^u, \tilde{p}_i^u\right), \quad (13)$$

where $h_v^u$ is the index of $\tilde{p}_{h_v^u}^u$ in the 2D point cloud.

Since the corresponding relationship of the points remains unchanged during the projection process from a 3D point cloud to a 2D point cloud, the corresponding point in the 3D point cloud of $\tilde{q}_v^u$ can be written as $P_{h_v^u}^1$.

The clustering of $\left\{P_{h_v^u}^1\right\}$ is executed on the basis of Euclidean distance according to the following steps:

(a) $\eta$ is notated as the number of categories.

$$\eta = \sum_{u=1}^{3s} r_u + 1 \quad (14)$$

Here, $r_u$ is the number of all detected corners in image $I^u$, and $3s$ is the number of all simulated images.

(b) $d_{ij}$ represents the distance between the centers of $C_i$ and $C_j$, where $C_i$ and $C_j$ are the $i$-th and $j$-th categories. If the minimum of $\{d_{ij}\}$ is smaller than the distance threshold $\lambda$, cluster $\left\{P_{h_v^u}^1\right\}$ into $\eta-1$ categories and set $\eta = \eta-1$.


(c) Repeat step (b) until the minimum of $\{d_{ij}\}$ is equal to or greater than $\lambda$.
(d) The coordinate of each clustering center can be obtained by calculating the mean value of the members in its corresponding category.

$\psi_i$ represents the number of members of $C_i$. Since the cone vertex is a robust feature point, it should be observed from the most viewpoints. Thus, all $C_i$ that satisfy $\psi_i \leq \kappa$ should be deleted, as shown in Figure 4c, where $\kappa$ is a threshold set for the number of observations. Figure 4d shows the rough localization result of a cone vertex. The detected cone vertex is notated as $\chi_i$ $(i = 1, 2, \ldots, \rho-1, \rho)$, where $\rho$ is the number of all detected cone vertices.
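The clustering steps (a)–(d) and the ψ/κ filter can be sketched as follows (a greedy merge on cluster-center distances; illustrative Python in which `lam` and `kappa` mirror the text's λ and κ):

```python
import numpy as np

def cluster_and_filter(points, lam, kappa):
    """Sketch of steps (a)-(d) plus the psi/kappa filter: greedy Euclidean
    clustering of candidate vertex points (list of 3-vectors); clusters
    whose member count psi_i <= kappa are discarded as fake vertices."""
    clusters = [[p] for p in points]                      # step (a)
    while True:
        centers = [np.mean(c, axis=0) for c in clusters]
        best, pair = np.inf, None
        for i in range(len(centers)):                     # step (b): min d_ij
            for j in range(i + 1, len(centers)):
                d = np.linalg.norm(centers[i] - centers[j])
                if d < best:
                    best, pair = d, (i, j)
        if pair is None or best >= lam:                   # step (c)
            break
        i, j = pair
        clusters[i].extend(clusters.pop(j))               # merge: eta -> eta-1
    # step (d) + filter: keep cluster means observed more than kappa times.
    return [np.mean(c, axis=0) for c in clusters if len(c) > kappa]
```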


Figure 4. The principles of cone vertex recognition. (a) Corner detection; (b) clustering of feature points; (c) data filtering; (d) rough localization.

2.3. Shape Fitting

Without loss of generality, $\chi_1$ can be taken as an example. In this case, $P_i^1 = \left(x_i^1, y_i^1, z_i^1\right)$ $(i = 1, 2, \ldots, \xi-1, \xi)$ represents the neighboring point cloud of $\chi_1$, where $\xi$ is the number of neighboring points. The quadratic form of the conic surface can be written as:

$$a_1\left(x_i^1\right)^2 + a_2\left(y_i^1\right)^2 + a_3\left(z_i^1\right)^2 + a_4 x_i^1 y_i^1 + a_5 x_i^1 z_i^1 + a_6 y_i^1 z_i^1 + a_7 x_i^1 + a_8 y_i^1 + a_9 z_i^1 + a_{10} = 0 \quad (15)$$

Then, Equation (15) can be rewritten in matrix form as:

$$E\varphi = 0. \quad (16)$$

The matrices of E and ϕ are as follows:

 12 12 12 1 1 1 1 1 1 1 1 1  xi yi zi xi yi xi zi yi zi xi yi zi 1  2 2 2   x1 y1 z1 x1y1 x1z1 y1z1 x1 y1 z1 1   i i i i i i i i i i i i   ......  E =  ......     12 12 12 1 1 1 1 1 1 1 1 1   xi yi zi xi yi xi zi yi zi xi yi zi 1  12 12 12 1 1 1 1 1 1 1 1 1 xi yi zi xi yi xi zi yi zi xi yi zi 1 T ϕ = a1 a2 a3 a4 a5 a6 a7 a8 a9 a10

E can be decomposed using a singular value decomposition (SVD)

$$E = U_E D_E V_E^T. \quad (17)$$

Then, $\varphi$ becomes the last column of $V_E$. The quadratic form of the conic surface can be rewritten in matrix form as:

$$\begin{bmatrix} x_i^1 & y_i^1 & z_i^1 & 1 \end{bmatrix} F \begin{bmatrix} x_i^1 & y_i^1 & z_i^1 & 1 \end{bmatrix}^T = 0, \quad (18)$$

where

$$F = \begin{bmatrix} a_1 & a_4/2 & a_5/2 & a_7/2 \\ a_4/2 & a_2 & a_6/2 & a_8/2 \\ a_5/2 & a_6/2 & a_3 & a_9/2 \\ a_7/2 & a_8/2 & a_9/2 & a_{10} \end{bmatrix}.$$

$F$ can be decomposed using an SVD.

$$F = U_F D_F V_F^T. \quad (19)$$

The homogeneous coordinates of the cone vertex are represented by the last column of $V_F$ (see proof in Appendix A) and notated as $\begin{bmatrix} v_1 & v_2 & v_3 & v_4 \end{bmatrix}^T$. Thus, the coordinates of the cone vertex $\chi_1'$ are as follows:

$$\chi_1' = \left( \frac{v_1}{v_4}, \frac{v_2}{v_4}, \frac{v_3}{v_4} \right). \quad (20)$$

In the same way, the coordinates of all cone vertices $\chi_i' = (x_i, y_i, z_i)$ $(i = 1, 2, \ldots, \rho-1, \rho)$ can be obtained.
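The whole closed-form fitting chain of Equations (15)–(20) condenses into a short sketch (our own illustration using NumPy's SVD; not the authors' code):

```python
import numpy as np

def fit_cone_vertex(P):
    """Sketch of Equations (15)-(20): fit a quadric to the neighborhood
    P (xi x 3) of a detected vertex and recover the vertex in closed form."""
    x, y, z = P[:, 0], P[:, 1], P[:, 2]
    one = np.ones_like(x)
    # Rows of E follow Equation (15)'s monomial order a1..a10.
    E = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z, one])
    # Equation (17): phi is the right-singular vector of the smallest
    # singular value, i.e., the last column of V_E.
    _, _, Vt = np.linalg.svd(E)
    a = Vt[-1]
    # Equation (18): symmetric 4x4 matrix form of the quadric.
    F = np.array([[a[0],   a[3]/2, a[4]/2, a[6]/2],
                  [a[3]/2, a[1],   a[5]/2, a[7]/2],
                  [a[4]/2, a[5]/2, a[2],   a[8]/2],
                  [a[6]/2, a[7]/2, a[8]/2, a[9]]])
    # Equations (19)-(20): the vertex is the last column of V_F,
    # dehomogenized by its fourth coordinate.
    _, _, VtF = np.linalg.svd(F)
    v = VtF[-1]
    return v[:3] / v[3]
```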

3. Automatic Registration Algorithm

${}^1\chi_i'$ $(i = 1, 2, \ldots, \rho_1-1, \rho_1)$ and ${}^2\chi_i'$ $(i = 1, 2, \ldots, \rho_2-1, \rho_2)$ are notated as the cone vertices in the reference and test point clouds, respectively. Since the corresponding relationship is unknown, $\{{}^1\chi_i'\}$ and $\{{}^2\chi_i'\}$ cannot be directly used to calculate the transformation matrix between the reference and test point clouds. The transformation matrix consists of a 3 × 3 rotation matrix $R$ and a 3 × 1 translation vector $T$. To solve the corresponding relationship and improve the robustness of the algorithm, a random strategy is used to calculate the corresponding relationship between $\{{}^1\chi_i'\}$ and $\{{}^2\chi_i'\}$. Two sequences of numbers are constructed in which the probability of each element obeys a uniform distribution:

$${}^1\Gamma = [1, 2, \ldots, \rho_1-1, \rho_1]$$
$${}^2\Gamma = [1, 2, \ldots, \rho_2-1, \rho_2]$$

Three different elements ${}^1\tau_j$ $(j = 1, 2, 3)$ and ${}^2\tau_j$ $(j = 1, 2, 3)$ are taken from ${}^1\Gamma$ and ${}^2\Gamma$, respectively. The corresponding cone vertices are as follows:

$${}^1\chi'_{{}^1\tau_j} = \left(x_{{}^1\tau_j}, y_{{}^1\tau_j}, z_{{}^1\tau_j}\right)$$
$${}^2\chi'_{{}^2\tau_j} = \left(x_{{}^2\tau_j}, y_{{}^2\tau_j}, z_{{}^2\tau_j}\right)$$

The transformation between ${}^1\chi'_{{}^1\tau_j}$ and ${}^2\chi'_{{}^2\tau_j}$ can be written as Equation (21) as:

$$\left({}^2\chi'_{{}^2\tau_j}\right)^T = R \times \left({}^1\chi'_{{}^1\tau_j}\right)^T + T. \quad (21)$$

The matrices ${}^1M$ and ${}^2M$ can be constructed using $\{{}^1\chi'_{{}^1\tau_j}\}$ and $\{{}^2\chi'_{{}^2\tau_j}\}$, respectively, as:

$${}^1M = \begin{bmatrix} {}^1\chi'_{{}^1\tau_1} \\ {}^1\chi'_{{}^1\tau_2} \\ {}^1\chi'_{{}^1\tau_3} \end{bmatrix} = \begin{bmatrix} x_{{}^1\tau_1} & y_{{}^1\tau_1} & z_{{}^1\tau_1} \\ x_{{}^1\tau_2} & y_{{}^1\tau_2} & z_{{}^1\tau_2} \\ x_{{}^1\tau_3} & y_{{}^1\tau_3} & z_{{}^1\tau_3} \end{bmatrix}, \quad {}^2M = \begin{bmatrix} {}^2\chi'_{{}^2\tau_1} \\ {}^2\chi'_{{}^2\tau_2} \\ {}^2\chi'_{{}^2\tau_3} \end{bmatrix} = \begin{bmatrix} x_{{}^2\tau_1} & y_{{}^2\tau_1} & z_{{}^2\tau_1} \\ x_{{}^2\tau_2} & y_{{}^2\tau_2} & z_{{}^2\tau_2} \\ x_{{}^2\tau_3} & y_{{}^2\tau_3} & z_{{}^2\tau_3} \end{bmatrix}. \quad (22)$$

Their center points can be calculated using Equation (23) as follows:

$${}^1M_c = \frac{\sum_{j=1}^{3} {}^1\chi'_{{}^1\tau_j}}{3}, \quad {}^2M_c = \frac{\sum_{j=1}^{3} {}^2\chi'_{{}^2\tau_j}}{3}. \quad (23)$$

The origin of the coordinate system can be shifted to the center points of ${}^1M$ and ${}^2M$ as:

$$\begin{cases} {}^1\overline{M} = {}^1M - {}^1M_c \\ {}^2\overline{M} = {}^2M - {}^2M_c \end{cases}. \quad (24)$$

Then, there only exists a rotation transformation between ${}^1\overline{M}$ and ${}^2\overline{M}$, expressed as:

$${}^2\overline{M} = R \times {}^1\overline{M}. \quad (25)$$

Matrix $\Omega = \left({}^2\overline{M}\right)^T\left({}^1\overline{M}\right)$ can be decomposed using an SVD as follows:

$$\Omega = U_\Omega D_\Omega V_\Omega^T. \quad (26)$$

$U_\Omega(i)$ $(i = 1, 2, 3)$ represents the $i$-th column of $U_\Omega$; thus, $R$ must be one of the following eight forms:

$$\begin{aligned} R_1 &= \left[\,U_\Omega(1)\;\; U_\Omega(2)\;\; U_\Omega(3)\,\right] V_\Omega^T, & R_5 &= \left[\,-U_\Omega(1)\;\; -U_\Omega(2)\;\; U_\Omega(3)\,\right] V_\Omega^T, \\ R_2 &= \left[\,U_\Omega(1)\;\; U_\Omega(2)\;\; -U_\Omega(3)\,\right] V_\Omega^T, & R_6 &= \left[\,-U_\Omega(1)\;\; U_\Omega(2)\;\; -U_\Omega(3)\,\right] V_\Omega^T, \\ R_3 &= \left[\,U_\Omega(1)\;\; -U_\Omega(2)\;\; U_\Omega(3)\,\right] V_\Omega^T, & R_7 &= \left[\,U_\Omega(1)\;\; -U_\Omega(2)\;\; -U_\Omega(3)\,\right] V_\Omega^T, \\ R_4 &= \left[\,-U_\Omega(1)\;\; U_\Omega(2)\;\; U_\Omega(3)\,\right] V_\Omega^T, & R_8 &= \left[\,-U_\Omega(1)\;\; -U_\Omega(2)\;\; -U_\Omega(3)\,\right] V_\Omega^T. \end{aligned}$$

The rotation error is defined as follows:

$$e_i = \frac{\operatorname{trace}\left(\left({}^2\overline{M} - R_i \times {}^1\overline{M}\right)^T \times \left({}^2\overline{M} - R_i \times {}^1\overline{M}\right)\right)}{3} + \zeta_p \times \left|\det(R_i) - 1\right|. \quad (27)$$

The first item in Equation (27) is the data term used to represent the geometric error of the rotation estimation; the second item in Equation (27) is the constraint term used to limit the coordinate system to being a right-handed coordinate system; trace(·) is a function used to calculate the sum of the diagonal elements of a matrix; and $\zeta_p$ is a penalty coefficient. Therefore,

$$\begin{cases} R = \operatorname*{arg\,min}_{R_i}(e_i) \\ T = {}^2M_c - R \times {}^1M_c \\ e_{\min} = \min\limits_{i=1,2,\ldots,7,8}(e_i) \end{cases}. \quad (28)$$

The threshold of the rotation error is notated as $\varepsilon_e$. If $e_{\min} < \varepsilon_e$, it indicates that the cone vertices in ${}^1M$ and ${}^2M$ are corresponding points and the solution of rotation and translation is credible. Otherwise, the following steps are repeated until $e_{\min} < \varepsilon_e$ or the number of iterations is greater than a threshold $N_{max}$: (a) take three different elements from ${}^1\Gamma$ and ${}^2\Gamma$, respectively; (b) calculate $R$, $T$, and $e_{\min}$ according to Equations (22)–(28). Since the probability of each element obeys a uniform distribution, the value of $N_{max}$ can be calculated using Equation (29) as:

$$N_{max} = \frac{A_{\rho_1}^3 A_{\rho_2}^3}{A_3^3} = \frac{\rho_1\rho_2(\rho_1-1)(\rho_1-2)(\rho_2-1)(\rho_2-2)}{6}. \quad (29)$$

The corresponding point pair between $\{{}^1\chi_i'\}$ and $\{{}^2\chi_i'\}$ is notated as $({}^1\hat{\chi}_i', {}^2\hat{\chi}_i')$ $(i = 1, 2, \ldots, n_c-1, n_c)$; thus, $({}^1\hat{\chi}_i', {}^2\hat{\chi}_i')$ should satisfy Equation (30) as:

$$\left\| R \times \left({}^1\hat{\chi}_i'\right)^T + T - \left({}^2\hat{\chi}_i'\right)^T \right\|_2 \leq \delta_d, \quad (30)$$

where $\delta_d$ is the distance threshold.

$\{{}^1P_j\}$ $(j = 1, 2, \ldots, n_1-1, n_1)$ represents the neighboring point cloud of ${}^1\hat{\chi}_i'$ in the reference point cloud. $\{{}^2P_j\}$ $(j = 1, 2, \ldots, n_2-1, n_2)$ represents the neighboring point cloud of ${}^2\hat{\chi}_i'$ in the test point cloud. On the above basis, taking the solution of Equation (28) as the initial value, the rotation matrix $\widetilde{R}$ and translation vector $\widetilde{T}$ can be solved using the ICP algorithm. Figure 5 shows the flowchart of the automatic registration algorithm.
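The core of one iteration of the random strategy, Equations (22)–(28), can be sketched as follows (illustrative Python; the right-handedness penalty is written as |det(R_i) − 1|, which is our reading of the constraint term in Equation (27)):

```python
import numpy as np
from itertools import product

def rigid_from_three(chi1, chi2, zeta_p=1e3):
    """Sketch of Equations (22)-(28): estimate (R, T) from three paired cone
    vertices chi1, chi2 (each 3x3, one vertex per row); the eight column-sign
    combinations of U are scored by the error of Equation (27)."""
    M1c, M2c = chi1.mean(axis=0), chi2.mean(axis=0)        # Equation (23)
    M1, M2 = chi1 - M1c, chi2 - M2c                        # Equation (24)
    Omega = M2.T @ M1                                      # Equation (26)
    U, _, Vt = np.linalg.svd(Omega)
    best = (np.inf, None)
    for s in product([1.0, -1.0], repeat=3):               # eight forms of R
        R = (U * np.array(s)) @ Vt
        res = M2.T - R @ M1.T
        e = np.trace(res.T @ res) / 3.0 \
            + zeta_p * abs(np.linalg.det(R) - 1.0)         # Equation (27)
        if e < best[0]:
            best = (e, R)
    e_min, R = best
    T = M2c - R @ M1c                                      # Equation (28)
    return R, T, e_min
```

In the full algorithm, this routine would sit inside the sampling loop bounded by N_max, and a pair (R, T) with e_min below the threshold seeds the final ICP refinement.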

Figure 5. Flowchart of the automatic registration algorithm.

4. Research Methodology

4.1. Cone Vertex Detection

The algorithm's robustness and accuracy can be evaluated by the detection rate $S_D$ and location error $E_D$ under Gaussian noise with different intensities. $S_D$ is defined as follows:

$$S_D = \frac{N_D}{N_T}, \quad (31)$$

where $N_D$ is the number of detected cone vertices and $N_T$ is the total number of cone vertices. $E_D$ is defined as follows:

$$E_D = \sqrt{\frac{\sum_{i=1}^{N_D} E_i^2}{N_D}}, \quad (32)$$

where $E_i$ represents the location error of the $i$-th cone vertex. A cone model provided by [21] was adopted to test the algorithm's performance. To test the influence of noise intensity on the detection of cone vertices, Gaussian noise was independently added to the X-, Y-, and Z-axes of each scene point. The standard deviation $\sigma_{GN}$ of the Gaussian noise ranged from 0 to 1 mr with a step size of 0.1 mr, where mr denotes the average mesh resolution of the models.
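The noise protocol and the metrics of Equations (31)–(32) can be sketched as follows (illustrative helper functions, our own naming; `mesh_resolution` converts mr units into model units):

```python
import numpy as np

def add_gaussian_noise(P, sigma_mr, mesh_resolution):
    """Sketch of the noise protocol: zero-mean Gaussian noise with standard
    deviation sigma_mr (in units of mr) is added independently to the
    X-, Y-, and Z-coordinates of every scene point P (Nx3)."""
    sigma = sigma_mr * mesh_resolution
    return P + np.random.normal(0.0, sigma, size=P.shape)

def detection_metrics(errors_mr, n_detected, n_total):
    """Equations (31)-(32): detection rate S_D and RMS location error E_D."""
    S_D = n_detected / n_total
    E_D = np.sqrt(np.sum(np.square(errors_mr)) / n_detected)
    return S_D, E_D
```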


where ND is the number of detected cone vertices and NT is the total number of cone vertice. E is defined as follows: 4.2. AutomaticD Registration v As shown in Figure 6, the reference andu teNstD point clouds consisted of five cones. u 2 There exists a rigid transformation between theu ∑referenceEi and test point clouds, as ex- t i=1 pressed in Equation (21). The ideal rotationED = matrix and, translation vector are notated as (32) ND Rˆ and Tˆ , respectively. The calculated rotation matrix and translation vector are no- tatedwhere as REi andrepresents T , respectively. the location Rotation error of theandi -thtransl coneation vertex. errors under Gaussian noise with differentA cone intensities model provided were used by [21 to] wasevaluate adopted the algorithm’s to test the algorithm robustness performance. and accuracy. To test the influence ofε noise intensity on the detection of cone vertices, Gaussian noise was The rotation error r is defined as follows: independently added to the X-, Y-, and Z-axes of each scene point. The standard deviation σGN of Gaussian noise ranged from 0 totrace 1 mr() RRwith ˆ T − a1 step size of 0.1 mr, where mr denotes ε = 180 the average mesh resolutionr of thearccos models. . (33) 2π  4.2. Automatic Registration ε TheAs translation shown in error Figure T6 , is the defined reference as follows: and test point clouds consisted of five cones. There exists a rigid transformation between the reference and test point clouds, as expressed TT − ˆ in Equation (21). The ideal rotation matrixε = and translation vector are notated as Rˆ and Tˆ, T , (34) respectively. The calculated rotation matrixd andres translation vector are notated as Re and Te, respectively. Rotation and translation errors under Gaussian noise with different intensities where dres is equal to the average mesh resolution (dres = 1 mr). To test the influence of were used to evaluate the algorithm’s robustness and accuracy. The rotation error ε is Gaussian noise with different intensities on registration performance, Gaussian noise wasr defined as follows:   independently added to the X-, Y-, and Z-axes of eachˆ T scene point. The standard devia- trace ReR − 1 180 tion and step size were the sameε = asarccos in Section 4.1. In the experiment,. the corresponding (33) r   π Euler angle of Rˆ was [40°, 40°, 40°], and the rotation2 sequence was “XYZ”. Tˆ was [237 mr, 166 mr, −144 mr].

(a) (b)

FigureFigure 6. Point 6. Point cloud cloud registration. registration. (a) Reference (a) Reference point point cloud; cloud; (b) test (b) testpoint point cloud. cloud.

4.3. Surface Deformation Measurement

A camera-projector system based on the proposed method, shown in Figure 7a, was developed to dynamically measure the surface deformation during ablation tests in an arc-heated wind tunnel. The resolutions of the projector and camera images were 1280 × 800 px and 2456 × 2058 px, respectively. The sizes of the CCD and the CCD pixel were 2/3″ and 3.45 × 3.45 μm, respectively. The focal length was 75 mm. The data processing

800where px andd res2456is equal× 2058 to px, the respectively. average mesh The resolution sizes of CCD (dres and= 1 mr).CCD Topixel test were the influence2/3” and of 3.45Gaussian × 3.45 μm, noise respectively. with different The intensitiesfocal length on was registration 75 mm. performance,The data proc Gaussianessing platform noise was comprisedindependently a Dell laptop added with to the an X-, Intel(R) Y-, and Co Z-axesre(TM) of i7-6700HQ each scene point.CPU @ The 2.6 standardGHz and deviation16 GB RAM.and The step aim size was were to theevaluate same the as in system’s Section prec4.1.ision. In the Firstly, experiment, a model the with corresponding the size of Euler40 × 40angle × 5 mm of Rˆ waswas fixed [40◦, to 40 a◦ ,special 40◦], and device the which rotation had sequence six cones, was as “XYZ”.shown inTˆ Figurewas [237 7b. mr, Secondly,166 mr, the−144 device mr]. was placed on a translation stage, as shown in Figure 7c. The trans- lation stage’s precision was 0.01 mm. Thirdly, surface ablation could be simulated by moving4.3. Surface the model Deformation on the Measurementtranslation stage. The camera-projector system was used to reconstructA camera-projector the model surface system and the based six cones on the at time proposed 0 and method,time k, denoting shown inthe Figure refer-7a, encewas point developed cloud and to test measure point cloud, the surface respec deformationtively. Fourthly, dynamically the reference during and ablation test point tests cloudsin an were arc-heated aligned windusing tunnel. the proposed The resolutions registration of themethod, projector and andthen camera the model’s images sur- were face1280 deformation× 800 px couldand 2456 be measured.× 2058 px, Lastly, respectively. the measurement The sizes ofresult CCD was and compared CCD pixel with were the 2/3”reading and 3.45from× 3.45the µtranslationm, respectively. stage, The and focal the length measurement was 75 mm. error The of data the processing cam- era-projector system was calculated. Root-mean-square (RMS) was introduced to evalu- ate the above measurement error. Appl. Sci. 2021, 11, 4538 11 of 16

platform comprised a Dell laptop with an Intel(R) Core(TM) i7-6700HQ CPU @ 2.6 GHz and 16 GB RAM. The aim was to evaluate the system's precision. Firstly, a model with a size of 40 × 40 × 5 mm was fixed to a special device which had six cones, as shown in Figure 7b. Secondly, the device was placed on a translation stage, as shown in Figure 7c. The translation stage's precision was 0.01 mm. Thirdly, surface ablation could be simulated by moving the model on the translation stage. The camera-projector system was used to reconstruct the model surface and the six cones at time 0 and time k, denoting the reference point cloud and test point cloud, respectively. Fourthly, the reference and test point clouds were aligned using the proposed registration method, and then the model's surface deformation could be measured. Lastly, the measurement result was compared with the reading from the translation stage, and the measurement error of the camera-projector system was calculated. The root-mean-square (RMS) error was introduced to evaluate the above measurement error.

Figure 7. Experimental equipment. (a) Camera-projector system; (b) Model and special device; (c) Translation stage.

5. Research Results

5.1. Cone Vertex Detection

Figure 8 shows the point clouds of cone models where Gaussian noise was added with different intensities. Here, "+" represents the detected position of the cone vertex. Table 1 shows the statistical results of cone vertex detection.


Figure 8. The detection results. (a) σGN = 0 mr; (b) σGN = 0.5 mr; (c) σGN = 1 mr.

Table 1. The statistical results of cone vertex detection (mr denotes the average mesh resolution of the models).

Noise Intensity (mr)    Location Error (mr)    Detection Rate
0.0    0.00    100%
0.1    0.16    100%
0.2    0.27    100%
0.3    0.45    100%
0.4    0.75    100%
0.5    1.24    100%
0.6    1.56    100%
0.7    1.85    100%
0.8    2.29    100%
0.9    2.58    100%
1.0    3.13    97%




5.2. Automatic Registration

Figures 9 and 10 show the reference and test point clouds, respectively, where Gaussian noise was added with different intensities. Here, "+" represents the detected position of the cone vertex. Figure 11 shows the relationship between registration error and noise intensity.


Figure 9. Reference point cloud. (a) σGN = 0.5 mr; (b) σGN = 1 mr.


(a) (b) ((aa)) ((bb)) Figure 11. The relationship between registrati registrationon error error and and noise noise intensity. intensity. ( (aa)) Rotation Rotation error; error; ( (bb)) translation translation error. error. Note, Note, FigureFigure 11. 11. The The relationship relationship between between registrati registrationon error error and and noise noise intensity. intensity. ( a(a) )Rotation Rotation error; error; ( b(b) )translation translation error. error. Note,Note, mr denotes the average mesh resolution of the models. mrmr denotes denotes the the average average mesh mesh resolution resolution of of the the models. models. 5.3. Surface Deformation Measurement 5.3.5.3. Surface Surface Deformation Deformation Measurement Measurement Figure 12 shows the model’s surface deformation at different times. Table 2 shows Figure 12 shows the model’s surface deformation at different times. Table 2 shows the comparisonFigure 12 shows results the of model’sreadings surface and measurement deformation results. at different times. Table 2 shows thethe comparison comparison results results of of readings readings and and measurement measurement results. results. Appl. Sci. 2021, 11, 4538 13 of 16

5.3. Surface Deformation Measurement

Figure 12 shows the model's surface deformation at different times. Table 2 shows the comparison results of the readings and measurement results.


Figure 12. Surface deformation measurement results. (a) 0 s, reading = 0.000 mm; (b) 9 s, reading = 1.810 mm.

Table 2. Statistical results.

Time (s)    Reading (mm)    Measurement Result (mm)    Error (mm)
0.0    0.000    0.035    0.035
0.5    0.100    0.109    0.009
1.0    0.200    0.196    0.004
1.5    0.300    0.300    0.000
2.0    0.400    0.402    0.002
2.5    0.500    0.492    0.008
3.0    0.600    0.605    0.005
3.5    0.700    0.703    0.003
4.0    0.800    0.795    0.005
4.5    0.900    0.903    0.003
5.0    1.000    1.008    0.008
5.5    1.100    1.106    0.006
6.0    1.200    1.210    0.010
6.5    1.300    1.320    0.020
7.0    1.400    1.433    0.033
7.5    1.500    1.522    0.022
8.0    1.600    1.620    0.020
8.5    1.710    1.742    0.032
9.0    1.810    1.843    0.033
9.5    1.910    1.952    0.042
10.0    2.010    2.056    0.046
12.5    2.500    2.537    0.037
15.0    3.000    3.029    0.029
20.0    4.000    4.035    0.035

6. Discussion

(1) As shown in Figure 8c, when σGN = 1 mr, the conic surface was very rough, but its vertex could still be detected accurately, which fully proves the robustness of the algorithm to Gaussian noise. Table 1 shows the statistical results of the detection rate and location error under Gaussian noise with different intensities. It can be seen that the location error increased with increasing noise intensity. In general, detection of the cone vertex was considered successful if the location error was lower than 5 mr. Thus, the algorithm can maintain a high detection rate under Gaussian noise with intensity ranging from 0 to 1 mr.
(2) Figures 9–11 indicate that the rotation and translation errors increase with increasing noise intensity. When σGN = 1 mr, most details on the conic surfaces were lost, but the

point clouds could still be aligned accurately, which proves the robustness of the algorithm to Gaussian noise. In general, the point cloud registration was successful if the rotation error was lower than 5◦ and the translation error was lower than 5 mr. Thus, the algorithm performs well under Gaussian noise with an intensity ranging from 0 to 1 mr. (3) In general, surface deformation of a near-space supersonic vehicle is no more than 4 mm during ablation tests in an arc-heated wind tunnel. Table2 shows the statistical results, where it can be seen that the precision of surface deformation measurement exceeds 0.05 mm when surface deformation is smaller than 4 mm.
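The success criterion in points (1) and (2) can be made concrete. The following is a minimal NumPy sketch, assuming the common convention that the rotation error is the geodesic angle of the relative rotation and the translation error is the Euclidean distance between the translation vectors; the function name is illustrative and not taken from the paper.

```python
import numpy as np

def registration_errors(R_est, t_est, R_gt, t_gt):
    """Rotation error (degrees) and translation error of an estimated
    rigid transform against the ground truth.

    Rotation error is computed as the geodesic angle of the relative
    rotation R_est @ R_gt.T; translation error is the Euclidean norm of
    the difference of the translation vectors (in units of mr when the
    cloud is scaled by its mean mesh resolution).
    """
    dR = R_est @ R_gt.T
    # Clip guards against round-off pushing the argument outside [-1, 1].
    cos_theta = np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = np.degrees(np.arccos(cos_theta))
    trans_err = float(np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt)))
    return rot_err_deg, trans_err

# A registration is counted as successful when the rotation error is
# below 5 degrees and the translation error is below 5 mr.
rot_err, trans_err = registration_errors(np.eye(3), np.zeros(3), np.eye(3), np.zeros(3))
success = rot_err < 5.0 and trans_err < 5.0
```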

7. Conclusions

To solve the problem of surface deformation measurement during ablation tests in an arc-heated wind tunnel, in this study, we proposed an automatic point cloud registration method. Compared with other registration methods, we provided a complete solution comprising two parts: (1) guaranteeing high-quality initial values and overlaps for aligning point clouds that have undergone large deformations; (2) a strategy to automatically compute rigid transformations between point clouds. Inspired by the 2D artificial targets used to improve the precision of camera calibration and photogrammetry, we introduced 3D artificial targets to obtain accurate registration results, which was the key idea for solving the surface deformation measurement problem; most state-of-the-art approaches only consider how to align point clouds more accurately and robustly given a sufficiently large overlap. Simulations and experiments were conducted, and the results indicated the following: (1) The proposed method performed well under Gaussian noise with an intensity ranging from 0 to 1 mr. When σGN = 1 mr, the rotation and translation errors were smaller than 0.025° and 1 mr, respectively. (2) The error of surface deformation measurement was smaller than 0.05 mm when the deformation was no more than 4 mm. In addition to surface deformation measurement, the proposed method can also be applied to experimental studies of soft matter. However, some aspects still need to be studied: (1) The procedure of cone vertex detection should be made more efficient. Compared with the Harris operator used in cone vertex detection, the Radon or Hough transform may be simpler [22,23], but its robustness needs to be evaluated. (2) The 3D artificial targets should be more varied. An artificial neural network (ANN) could be trained as a classifier, with rotational projection statistics (RoPS) taken as its inputs [6]. Such a classifier could recognize different geometric features and reject fake features, which would improve the efficiency of data fusion and decrease the time cost.

Author Contributions: Conceptualization, J.L.; Data curation, P.G.; Formal analysis, J.L. and X.S.; Funding acquisition, J.L.; Investigation, J.L. and P.G.; Methodology, J.L.; Supervision, X.S.; Validation, X.S.; Writing—original draft, J.L.; Writing—review and editing, P.G. and X.S. All authors have read and agreed to the published version of the manuscript.

Funding: This work was funded by the National Natural Science Foundation of China under grant no. 11802321.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

Lemma A1. The quadratic form of the conic surface can be written as:

$$P^T A P = 0, \qquad (A1)$$

and:

$$A = \begin{pmatrix} a_1 & a_4/2 & a_5/2 & a_7/2 \\ a_4/2 & a_2 & a_6/2 & a_8/2 \\ a_5/2 & a_6/2 & a_3 & a_9/2 \\ a_7/2 & a_8/2 & a_9/2 & a_{10} \end{pmatrix}, \qquad P = \begin{pmatrix} x & y & z & 1 \end{pmatrix}^T.$$

A can be decomposed using SVD:

$$A = U_A D_A V_A^T. \qquad (A2)$$

Then, the last column of $V_A$ represents the homogeneous coordinates of the cone vertex.

Proof. Matrix A can be diagonalized on the basis of its eigenvalues and eigenvectors:

$$A = R D R^T, \qquad (A3)$$

where D is a diagonal matrix of the eigenvalues and R is an orthogonal matrix whose columns are the corresponding eigenvectors. Substituting Equation (A3) into Equation (A1) yields:

$$P^T A P = \left(P^T R\right) D \left(P^T R\right)^T = P'^T D P' = 0, \qquad (A4)$$

where $P' = R^T P$, and $P'^T D P' = 0$ is the standard quadratic form of the conic surface. Equation (A4) is essentially a transformation between coordinate systems. Through this transformation, the cone vertex is shifted to the origin of the new coordinate system, and the axis of the cone is made parallel to the Z-axis of the new coordinate system. The homogeneous coordinates of the cone vertex in the new coordinate system can be written as:

$$P'_v = \begin{pmatrix} 0 & 0 & 0 & 1 \end{pmatrix}^T. \qquad (A5)$$

Because $P' = R^T P$, the homogeneous coordinates of the cone vertex in the original coordinate system can be calculated using Equation (A6) as follows:

$$P_v = \left(R^T\right)^{-1} P'_v. \qquad (A6)$$

Since A is a real symmetric matrix, its eigenvectors can be chosen to be orthonormal, so that $R^T R = I$, where I is the 4 × 4 identity matrix; therefore,

$$A^T A = R D R^T R D R^T = R D^2 R^T. \qquad (A7)$$

Hence, R is a matrix consisting of the eigenvectors of $A^T A$. On the other hand, A can be decomposed using SVD:

$$A = U_A D_A V_A^T. \qquad (A8)$$

Thus, up to the sign and ordering of the columns,

$$R = V_A. \qquad (A9)$$

Substituting Equation (A9) into Equation (A6) yields:

 0  −1  T 0  0  Pv = R P v = V  . (A10) A 0  1

Therefore, the homogeneous coordinates of the cone vertex are given by the last column of $V_A$. The lemma has been proven. □
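For reference, the following is a minimal NumPy sketch of the vertex extraction described by Lemma A1. The coefficient ordering a1–a10 follows the definition of A above; the quadric coefficients are assumed to come from a cone fit such as the one used in the main text, and the function name is illustrative rather than the implementation used in the paper.

```python
import numpy as np

def cone_vertex_from_quadric(a):
    """Cone vertex from the 10 quadric coefficients a1..a10 (Lemma A1).

    Builds the symmetric 4x4 matrix A of the quadratic form P^T A P = 0,
    takes its SVD, and reads the vertex off the last column of V_A.
    """
    a1, a2, a3, a4, a5, a6, a7, a8, a9, a10 = a
    A = np.array([
        [a1,     a4 / 2, a5 / 2, a7 / 2],
        [a4 / 2, a2,     a6 / 2, a8 / 2],
        [a5 / 2, a6 / 2, a3,     a9 / 2],
        [a7 / 2, a8 / 2, a9 / 2, a10   ],
    ])
    # numpy returns V^T, so the last column of V_A is the last row of Vt.
    _, _, Vt = np.linalg.svd(A)
    v = Vt[-1]
    return v[:3] / v[3]  # dehomogenize to obtain the 3D vertex

# Example: x^2 + y^2 - z^2 = 0 is a cone with its vertex at the origin.
coeffs = (1, 1, -1, 0, 0, 0, 0, 0, 0, 0)
print(cone_vertex_from_quadric(coeffs))  # -> [0. 0. 0.]
```

For a proper cone, the smallest singular value of A is zero and the corresponding right singular vector is the homogeneous vertex, so the division by the last component is well defined.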

References

1. Nguyen, T.H.; Nguyen, T.M.; Xie, L. Tightly-coupled ultra-wideband-aided monocular visual SLAM with degenerate anchor configurations. Auton. Robot. 2020, 44, 1519–1534. [CrossRef]
2. Grant, W.S.; Voorhies, R.C.; Itti, L. Efficient Velodyne SLAM with point and plane features. Auton. Robot. 2018, 43, 1207–1224.
3. Wang, F.R.; Lu, E.; Wang, Y.; Qiu, G.J.; Lu, H.Z. Efficient stereo visual simultaneous localization and mapping for an autonomous unmanned forklift in an unstructured warehouse. Appl. Sci. 2020, 10, 698. [CrossRef]
4. Lei, H.; Jiang, G.; Quan, L. Fast descriptors and correspondence propagation for robust global point cloud registration. IEEE Trans. Image Process. 2017, 26, 1. [CrossRef] [PubMed]
5. Dong, Z.; Liang, F.; Yang, B. Registration of large-scale terrestrial laser scanner point clouds: A review and benchmark. ISPRS J. Photogramm. Remote Sens. 2020, 163, 327–342. [CrossRef]
6. Guo, Y.; Sohel, F.; Bennamoun, M.; Lu, M.; Wan, J. Rotational projection statistics for 3D local surface description and object recognition. Int. J. Comput. Vis. 2013, 105, 63–86. [CrossRef]
7. Tran, D.-S.; Ho, N.-H.; Yang, H.-J.; Baek, E.-T.; Kim, S.-H.; Lee, G. Real-time hand gesture spotting and recognition using RGB-D camera and 3D convolutional neural network. Appl. Sci. 2020, 10, 722. [CrossRef]
8. Besl, P.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [CrossRef]
9. Servos, J.; Waslander, S.L. Multi-Channel Generalized-ICP: A robust framework for multi-channel scan registration. Robot. Auton. Syst. 2017, 87, 247–257. [CrossRef]
10. Serafin, J.; Grisetti, G. NICP: Dense normal based point cloud registration. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–3 October 2015; pp. 742–749.
11. Attia, M.; Slama, Y. Efficient initial guess determination based on 3D point cloud projection for ICP algorithms. In Proceedings of the 2017 International Conference on High Performance Computing & Simulation (HPCS), Genoa, Italy, 17–21 July 2017; pp. 807–814.
12. Makovetskii, A.; Voronin, S.; Kober, V.; Tihonkih, D. An efficient point-to-plane registration algorithm for affine transformation. In Applications of Digital Image Processing XL; SPIE: Bellingham, WA, USA, 2017.
13. Li, P.; Wang, R.; Wang, Y.; Tao, W. Evaluation of the ICP algorithm in 3D point cloud registration. IEEE Access 2020, 8, 68030–68048. [CrossRef]
14. Wu, P.; Li, W.; Yan, M. 3D scene reconstruction based on improved ICP algorithm. Microprocess. Microsyst. 2020, 75, 103064. [CrossRef]
15. Kang, Z.; Yang, J. A probabilistic graphical model for the classification of mobile point clouds. ISPRS J. Photogramm. Remote Sens. 2018, 143, 108–123. [CrossRef]
16. Hong, H.; Lee, B.H. Probabilistic normal distributions transform representation for accurate 3D point cloud registration. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 3333–3338. [CrossRef]
17. Zhu, H.; Guo, B.; Zou, K.; Li, Y.; Yuen, K.-V.; Mihaylova, L.; Leung, H. A review of point set registration: From pairwise registration to groupwise registration. Sensors 2019, 19, 1191. [CrossRef] [PubMed]
18. Lu, M.; Zhao, J.; Guo, Y.; Ma, Y. Accelerated coherent point drift for automatic 3D point cloud registration. IEEE Geosci. Remote Sens. Lett. 2016, 13, 162–166.
19. Wang, G.; Chen, Y. Fuzzy correspondences guided Gaussian mixture model for point set registration. Knowl. Based Syst. 2017, 136, 200–209. [CrossRef]
20. Ma, J.; Jiang, J.; Liu, C.; Li, Y. Feature guided Gaussian mixture model with semi-supervised EM and local geometric constraint for retinal image registration. Inf. Sci. 2017, 417, 128–142. [CrossRef]
21. Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L.; Tang, X.; Xiao, J. 3D ShapeNets: A deep representation for volumetric shapes. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1912–1920.
22. Polyakova, A.P.; Svetov, I.E. On a singular value decomposition of the normal Radon transform operator acting on 3D 2-tensor fields. J. Phys. Conf. Ser. 2021, 1715, 012041. [CrossRef]
23. Wang, Y.S.; Qi, Y.; Man, Y. An improved Hough transform method for detecting forward vehicle and lane in road. J. Phys. Conf. Ser. 2021, 1757, 012082. [CrossRef]