
Article

Adaptive Multi-View Image Mosaic Method for Conveyor Belt Surface Fault Online Detection

Rui Gao 1,2,3, Changyun Miao 1,3,* and Xianguo Li 1,3,*

1 School of Mechanical , , 300387, ; [email protected]
2 School of Control and Mechanical Engineering, Tianjin Chengjian University, Tianjin 300384, China
3 Tianjin Photoelectric Detection Technology and System Key Laboratory, Tiangong University, Tianjin 300387, China
* Correspondence: [email protected] (C.M.); [email protected] (X.L.)

Abstract: To improve the accuracy and real-time performance of image mosaic, realize multi-view conveyor belt surface fault online detection, and solve the problem of longitudinal tear of the conveyor belt, this paper proposes an adaptive multi-view image mosaic (AMIM) method based on the combination of grayscale and feature. Firstly, the overlapping region of two adjacent images is preliminarily estimated by establishing the overlapping region estimation model, and then the grayscale-based method is used to register the overlapping region. Secondly, the image of interest (IOI) detection algorithm is used to divide the IOI and the non-IOI. Thirdly, only for the IOI, the feature-based partition and block registration method is used to register the images more accurately: the overlapping region is adaptively segmented, the speeded up robust features (SURF) algorithm is used to extract the feature points, and the random sample consensus (RANSAC) algorithm is used to achieve accurate registration. Finally, the improved weighted smoothing algorithm is used to fuse the two adjacent images. The experimental results showed that the registration rate reached 97.67% and the average time of stitching was less than 500 ms. This method is accurate and fast, and it is suitable for conveyor belt surface fault online detection.

Keywords: image mosaic; conveyor belt surface fault; online detection; overlapping region

Citation: Gao, R.; Miao, C.; Li, X. Adaptive Multi-View Image Mosaic Method for Conveyor Belt Surface Fault Online Detection. Appl. Sci. 2021, 11, 2564. https://doi.org/10.3390/app11062564

Academic Editor: Dariusz Frejlichowski

Received: 9 February 2021; Accepted: 9 March 2021; Published: 12 March 2021

Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Belt conveyor is a kind of continuous transportation equipment in modern production that has been widely used in the coal, mining, port, electric power, metallurgy, and chemical industries, as well as in other fields [1]. The conveyor belt is the key component of the belt conveyor for traction and load-bearing, and it can easily develop surface scratches, longitudinal tears, or fractures [2]. If these faults are not detected and handled in time, they may cause equipment damage, material loss, and other substantial economic losses, and even cause personal casualties, affecting production safety [3]. At present, machine vision technology can be used for the online detection of conveyor belt surface faults [3–7]. However, due to the structural characteristics of the conveyor, the distance between the upper conveyor belt and the lower conveyor belt is small. The sight distance is small, while the bandwidth and field of view are large. Single-view conveyor belt surface fault detection can hardly meet the bandwidth requirements, and there are blind areas and unsatisfactory imaging effects. Therefore, multi-view detection is needed, and the multi-view conveyor belt images need to be stitched online.

At present, there are three main image mosaic methods: the method based on gray information, the method based on frequency domain correlation, and the method based on feature correlation. Among them, the method based on feature correlation has received wide attention because of its good robustness. The more mature feature operators are SIFT (scale-invariant feature transform), SURF (speeded up robust features), BRIEF (binary robust independent elementary features), ORB (oriented FAST and rotated BRIEF),


etc. The SIFT algorithm was proposed by Lowe in 1999 and improved in 2004. It not only has good robustness to image rotation and scale scaling, but also maintains good stability against noise interference and changes in brightness and viewing angle [8,9]. However, the algorithm does not consider the distribution of feature points. It extracts too many feature points in regions with intricate details, which can easily produce false matches. The algorithm is computationally complex and time-consuming, so it is difficult to meet real-time requirements. For this reason, Li et al. [10] proposed an image mosaic algorithm based on the combination of region segmentation and SIFT, which improved the stitching speed. However, the amount of calculation is still large. Bay et al. [11] proposed SURF to overcome the shortcomings of the SIFT algorithm. The SURF algorithm inherits the scale and rotation invariance of the SIFT algorithm and adopts the Haar feature and the integral image concept, making the calculation more efficient. Its speed is about three times that of SIFT. However, the SURF algorithm also suffers from wrong matches, which reduce the accuracy of the matching results and dramatically slow down the process due to unstable feature points. Therefore, there are many studies on improving the performance of the SURF algorithm. Jia et al. [12] constructed a k-dimensional (K-D) tree and improved the best bin first (BBF) algorithm to replace the linear search and speed up SURF matching. Zhang et al. [13] proposed an improved SURF algorithm with good application prospects in low-light conditions. Zhang et al. [14] used the Haar wavelet response to establish a descriptor for each feature point and calculated the normalized gray difference and two-step degree of the neighborhood to form a new feature descriptor.
The SURF algorithm was improved to make it robust and stable to image blur, illumination difference, angle rotation, and field-of-view transformation. In terms of overall technological development, image stitching has become increasingly mature and stable. Many excellent image stitching algorithms [15–22] have achieved good results in some scenes. However, image stitching involves many research fields, and these stitching methods have many problems in dealing with special scenes such as conveyor belt surface fault detection. Due to the characteristics of the surface material and image acquisition of the conveyor belt, it is not easy to extract and match the feature points of the conveyor belt image. Moreover, due to the large amount of data in the multi-view images, it is not easy to stitch them accurately and quickly online, which affects fault detection and recognition. The current image mosaic methods are therefore not suitable for multi-view conveyor belt surface fault online detection. According to the particularity of this image mosaic task, an accurate and fast image mosaic method is studied. This paper presents an adaptive multi-view image mosaic method for conveyor belt surface fault online detection based on the combination of grayscale and feature. This method has a high registration rate and reduces the amount of stitching calculation and the time of the stitching process. It solves the problem of the difficulty of performing accurate and fast online stitching of the collected multi-view conveyor belt images, which affects fault detection and identification.
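As context for the RANSAC-based registration used later in this paper, the following pure-Python sketch shows the core idea of RANSAC on hypothetical feature matches: hypothesize a model from a minimal sample, count the consensus set, and discard mismatches. The 1-D column-offset model, the data, and the tolerance are invented for illustration and are much simpler than a full image transform.

```python
import random

def ransac_offset(matches, n_iter=200, tol=2.0, seed=0):
    """Estimate the column offset between two images from putative feature
    matches (u_left, u_right) while tolerating wrong matches. Each trial
    hypothesizes offset = u_right - u_left from one sampled pair and keeps
    the hypothesis with the largest consensus set of matches."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iter):
        u_l, u_r = rng.choice(matches)          # minimal sample: one pair
        offset = u_r - u_l
        inliers = [(a, b) for a, b in matches
                   if abs((b - a) - offset) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # refine the model by averaging over the consensus set
    return sum(b - a for a, b in best_inliers) / len(best_inliers)

# eight consistent matches (offset -1024 columns) plus three gross mismatches
good = [(1300 + 10 * i, 276 + 10 * i) for i in range(8)]
bad = [(1400, 900), (1500, 120), (1700, 1600)]
print(ransac_offset(good + bad))  # -1024.0
```

The mismatched pairs never join the consensus set of the correct hypothesis, so they have no influence on the refined estimate; this is the property that makes RANSAC attractive for rejecting wrong SURF matches.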

2. Multi-View Conveyor Belt Surface Fault Online Detection System

The multi-view conveyor belt surface fault online detection system based on machine vision comprises line-scan charge-coupled device (CCD) network cameras, linear light sources, an Ethernet switch, and a computer. The schematic diagram of the system components is shown in Figure 1. Diffuse reflection light is produced when the light emitted by a high-brightness linear light source irradiates the surface of the conveyor belt. The light intensity of the diffuse reflection light is related to the surface characteristics of the conveyor belt [4–6]. The multi-view line-scan CCD cameras sense the diffuse reflection light through line scanning. Each scan takes a line of the image perpendicular to the running direction of the conveyor belt and transmits it to the computer through Ethernet. The computer processes the running image of the conveyor belt, analyzes and recognizes conveyor belt faults, and gives a fault alarm or shutdown control signal when a fault is found.

Figure 1. Schematic diagram of multi-view conveyor belt surface fault online detection system.

The structural characteristics of the conveyor limit the installation space of the camera. Generally, the camera can only be installed close to the conveyor belt. If the image of the lower surface of the conveyor belt is to be collected for fault detection, the camera can only be installed in the narrow space between the upper and lower belts, close to the lower surface of the conveyor belt. The distance is less than 40 cm, and the object distance is small, as shown in Figure 2. The width of the conveyor belt is generally 0.8–2.4 m. Because the cross-section is an arc and the field of view is large, it is difficult to cover the full width of the conveyor belt accurately with a single camera. It is necessary to use multiple cameras to acquire images in a multi-viewpoint manner, selecting appropriate installation positions and postures to cover the conveyor belt without blind areas.

However, two adjacent images have overlapping regions that image the same conveyor belt content. Determining the two images' overlapping regions can effectively reduce the processing range of the image, reduce the amount of calculation, and improve the stitching speed. The geometric shape of the conveyor belt and the influence of factors such as the parameters, installation position, and posture of the multi-view cameras make it difficult to determine the exact overlapping region.

Figure 2. Installation position of the upper belt image acquisition device on the conveyor. (a) Picture of the device installed in the laboratory. (b) Picture of the device installed on site.

3. Materials and Methods

The adaptive multi-view image mosaic (AMIM) method proposed in this paper realizes image stitching for multi-view conveyor belt surface fault online detection by the following sub-methods: grayscale-based overlapping region registration, the division of the IOI (image of interest), feature-based adaptive partition and block registration, and improved weighted smooth fusion. The flow chart of the AMIM method is shown in Figure 3.

Figure 3. Flow chart of the adaptive multi-view image mosaic (AMIM) method.

3.1. Grayscale-Based Overlapping Region Registration

3.1.1. Overlapping Region Estimation

The overlapping region estimation model of 2 adjacent images is established according to the geometry of the conveyor belt and the parameters, installation position, and posture of the multi-view cameras. For a conveyor belt, the horizontal distance between its two ends is set as B, and the speed is V. Suppose it runs smoothly and at a uniform speed. The sensor pixel size of the line-scan camera is w, the number of pixels is n, the line sampling frequency is fh, and the lens focal length is f. For the conveyor belt, the world coordinate system Ow-XwYwZw is established. The Xw axis passes through the bottom end of the arc cross-section of the conveyor belt and is parallel to the ground, with the direction from left to right. The Zw axis passes through the leftmost endpoint of the arc cross-section of the conveyor belt and is perpendicular to the ground, with the direction from bottom to top. The intersection point Ow of the Xw axis and the Zw axis is the origin, and the origin coordinate is (0, 0, 0). The direction of V is the same as the Yw axis. Then, in the world coordinate system Ow-XwYwZw, the coordinate of the camera installation position Oc is (X0, Y0, Z0), in millimeters; the camera installation posture is (α, β, γ), in degrees (°). In the line-scan camera imaging process, the space points are projected onto the image sensing pixels to form an image, and the image pixel coordinate system is Of-uv. The imaging schematic diagram of the line-scan camera is shown in Figure 4.

The static imaging of a line-scan camera can only obtain a one-dimensional image of the conveyor belt. Because of the relative motion between the conveyor belt and the line-scan camera, the line-scan camera can obtain a two-dimensional image of the conveyor belt by dynamic imaging. Suppose that the coordinate of any point P on the conveyor belt in the world coordinate system Ow-XwYwZw is (XP, YP, ZP), and the coordinate of the point P imaged in the image pixel coordinate system Of-uv is (u, v). According to the fundamental relationship of the perspective projection geometric imaging model and the imaging characteristics of the area-scan camera and the line-scan camera [23,24], one can obtain the dynamic imaging model of the line-scan camera, as seen in Equation (1):

  u = [a2(XP − X0) + b2(YP − Y0) + c2(ZP − Z0)] / (b2V′)
  v = v0 + ∆v − f′·[c3(XP − X0) − a3(ZP − Z0)] / [a1(ZP − Z0) − c1(XP − X0)]   (1)
  s.t. ZP = g(XP)

where v0 is the pixel coordinate value of the main imaging point O of the sensor pixel center of the line-scan camera, which can generally be set as v0 = n/2, and f′ = f/w. V′ represents the relative motion between the conveyor belt and the line-scan camera in each image acquisition cycle, satisfying V′ = V/fh. Compared with V′, the relative motion of the conveyor belt caused by vibration and offset can be ignored. ∆v is the imaging distortion, which increases gradually from v0 toward both sides:

  ∆v = k1(v − v0)^5 + k2(v − v0)^3 + k3(v − v0)^2   (2)

where k1, k2, and k3 are distortion parameters, which can be obtained by camera calibration.
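As an illustration, the v component of Equation (1) can be evaluated numerically. Since ∆v in Equation (2) depends on v itself, one simple approach (our choice for this sketch, not prescribed by the paper) is fixed-point iteration; all numeric values below are illustrative, not calibration results from the paper.

```python
def distortion(v, v0, k1, k2, k3):
    """Equation (2): imaging distortion, growing from v0 toward both sides."""
    d = v - v0
    return k1 * d**5 + k2 * d**3 + k3 * d**2

def project_v(Xp, Zp, X0, Z0, R, f_prime, v0, ks, tol=1e-9, n_iter=50):
    """v component of Equation (1): v = v0 + dv(v) - f' * num / den.
    Because dv depends on v itself, solve by fixed-point iteration."""
    a1, _, c1 = R[0]          # row (a1, b1, c1) of the rotation matrix
    a3, _, c3 = R[2]          # row (a3, b3, c3)
    num = c3 * (Xp - X0) - a3 * (Zp - Z0)
    den = a1 * (Zp - Z0) - c1 * (Xp - X0)
    v = v0
    for _ in range(n_iter):
        v_new = v0 + distortion(v, v0, *ks) - f_prime * num / den
        if abs(v_new - v) < tol:
            return v_new
        v = v_new
    return v

# identity rotation (posture 0, 0, 0) and zero distortion: pure pinhole case
I_R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(project_v(100.0, -400.0, 0.0, 0.0, I_R, 2000.0, 1024.0, (0.0, 0.0, 0.0)))  # 1524.0
```

With zero distortion the iteration converges in one step; with small calibrated k1, k2, k3 it converges quickly because ∆v varies slowly near the solution.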

Figure 4. Imaging schematic diagram of the line-scan camera.

In Equation (1), a1, a2, a3, b1, b2, b3, c1, c2, c3 are the elements of the rotation matrix of the rigid body transformation between the world coordinate system Ow-XwYwZw and the line-scan camera coordinate system Oc-XcYcZc. The rotation matrix is an orthogonal matrix related to the camera installation posture (α, β, γ), as shown in Equation (3):

  ⎡a1 b1 c1⎤   ⎡cos β cos γ                      cos β sin γ                      −sin β     ⎤
  ⎢a2 b2 c2⎥ = ⎢sin α sin β cos γ − cos α sin γ  sin α sin β sin γ + cos α cos γ  sin α cos β⎥   (3)
  ⎣a3 b3 c3⎦   ⎣cos α sin β cos γ + sin α sin γ  cos α sin β sin γ − sin α cos γ  cos α cos β⎦

In Equation (1), the constraint condition ZP = g(XP) is the equation of the conveyor belt arc cross-section in the world coordinate system, obtained by polynomial curve fitting. Let the range of the conveyor belt content corresponding to the overlapping region of 2 adjacent images in the Xw axis direction of the world coordinate system be [XP1, XP2]. The overlapping region of 2 adjacent images only considers the v axis coordinates of the image pixels. Let the pixel range of the overlapping region of the left image be [v1, n], and let the pixel range of the overlapping region of the right image be [1, v2]. According to Equation (1), the overlapping region estimation model can be further established, which satisfies the relationship of Equation (4):

  v1 = v0 + ∆v|v=v1 − f′·[c3(XP1 − X01) − a3(ZP1 − Z01)] / [a1(ZP1 − Z01) − c1(XP1 − X01)]
  n = v0 + ∆v|v=n − f′·[c3(XP2 − X01) − a3(ZP2 − Z01)] / [a1(ZP2 − Z01) − c1(XP2 − X01)]
  1 = v0 + ∆v|v=1 − f′·[c3′(XP1 − X02) − a3′(ZP1 − Z02)] / [a1′(ZP1 − Z02) − c1′(XP1 − X02)]   (4)
  v2 = v0 + ∆v|v=v2 − f′·[c3′(XP2 − X02) − a3′(ZP2 − Z02)] / [a1′(ZP2 − Z02) − c1′(XP2 − X02)]
  s.t. ZP1 = g(XP1), ZP2 = g(XP2)

where X01, Y01, Z01 and X02, Y02, Z02 are the installation position coordinates of the cameras corresponding to the 2 adjacent images, and a1, a3, c1, c3 and a1′, a3′, c1′, c3′ are the elements of the rotation matrices of the cameras corresponding to the 2 adjacent images. ∆v|v=v1, ∆v|v=n, ∆v|v=1, and ∆v|v=v2 are the values of ∆v when v is v1, n, 1, and v2, respectively. The overlapping region of 2 adjacent images can be estimated by solving for the unknown variables v1, v2, XP1, and XP2.
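For illustration, the rotation matrix of Equation (3) can be built directly from the installation posture and checked for the orthogonality the model requires. This is a sketch; the posture angles below are arbitrary examples, not values from the paper.

```python
import math

def rotation_matrix(alpha, beta, gamma):
    """Equation (3): rotation matrix from the installation posture
    (alpha, beta, gamma); angles in radians. Row i is (ai, bi, ci)."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    cg, sg = math.cos(gamma), math.sin(gamma)
    return [
        [cb * cg,                cb * sg,               -sb     ],
        [sa * sb * cg - ca * sg, sa * sb * sg + ca * cg, sa * cb],
        [ca * sb * cg + sa * sg, ca * sb * sg - sa * cg, ca * cb],
    ]

def is_orthogonal(R, tol=1e-9):
    """Check R * R^T == I, the defining property of a rotation matrix."""
    for i in range(3):
        for j in range(3):
            dot = sum(R[i][k] * R[j][k] for k in range(3))
            if abs(dot - (1.0 if i == j else 0.0)) > tol:
                return False
    return True

R = rotation_matrix(math.radians(10), math.radians(-5), math.radians(3))
print(is_orthogonal(R))  # True
```

The orthogonality check is a useful sanity test when the matrix elements a1 ... c3 are later substituted into the estimation model of Equation (4).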

3.1.2. Overlapping Region Registration

Because the range of the overlapping region obtained by the overlapping region estimation model is a rough estimate, it is necessary to carry out overlapping region registration. We used the grayscale-based method with gray correlation analysis to measure the correlation between the reference pixel and the comparison pixel in the local area of the image. We can find the position with a higher gray correlation degree in the estimated overlapping region and realize the overlapping region registration.

There are 2 adjacent m × n size images: the gray value of the left image pixel is f1(u,v), and the gray value of the right image pixel is f2(u,v), where u is the row coordinate, u∈[1,m], and v is the column coordinate, v∈[1,n]. The estimated overlapping region ranges of the left and right images are [v1,n] and [1,v2]. Registration of the overlapping regions can be divided into the following steps:

Step 1: Select the region image whose column coordinates are v∈[v1−L,n] in the left image, and select the region image whose column coordinates are v∈[1,v2 + L] in the right image, where L is the deviation value.

Step 2: Calculate the mean of the pixels of each column for the 2 regional images obtained. The Equation is as follows:

  img1(v) = Σ(u=1 to m) f1(u, v)/m   (5)
  img2(v) = Σ(u=1 to m) f2(u, v)/m

where img1(v) and img2(v) are the means of the pixels in column v.

Step 3: Affected by uneven illumination, weak imaging dark regions can appear at the edge of the image. To overcome their influence on stitching, for the left image, starting from v = n, one must look column by column in a descending manner for the first column that satisfies img1(v) ≥ Td and denote it as d1, d1 ≥ v1. For the right image, starting from v = 1, one must look column by column in an increasing manner for the first column that satisfies img2(v) ≥ Td and denote it as d2, d2 ≤ v2; Td is the threshold. For the columns of the left image v∈[d1 + 1,n] and the right image v∈[1,d2−1], the gray correlation degree calculation is not performed, and the corresponding gray correlation degree εl or εr is assigned the value of zero.
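Step 2 (Equation (5)) and the dark-edge trimming of Step 3 can be sketched in pure Python on a toy "image"; the threshold rule Td = half of the mean of the image pixels follows the experimental setting used later in this section, and the pixel values are invented for illustration.

```python
def column_means(img):
    """Equation (5): mean gray value of each column of an m x n image."""
    m = len(img)
    return [sum(row[v] for row in img) / m for v in range(len(img[0]))]

def trim_dark_edge(means, Td, from_right):
    """Step 3: scan from the image edge toward the interior and return the
    first column index whose mean reaches the threshold Td (d1 or d2)."""
    cols = range(len(means) - 1, -1, -1) if from_right else range(len(means))
    for v in cols:
        if means[v] >= Td:
            return v
    return None

# toy 3 x 6 'image' with two weakly lit columns on the right edge
img = [[60, 60, 50, 40, 5, 2],
       [62, 58, 52, 38, 6, 1],
       [58, 62, 48, 42, 4, 3]]
means = column_means(img)
Td = sum(means) / len(means) / 2     # half of the mean of the image pixels
d1 = trim_dark_edge(means, Td, from_right=True)
print(d1)  # 3 -> columns 4 and 5 are excluded as the dark region
```

Excluding the dark columns before the correlation search keeps the unlit edge from dragging the gray correlation degree down at valid candidate positions.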

Step 4: The reference sequences Xl0, Xr0 and the comparison sequences Xl1, Xr1 are determined.

  Xl0 = {Xl0(1), Xl0(2), Xl0(3), ..., Xl0(k)}
      = {img2(d2), img2(d2 + 1), img2(d2 + 2), ..., img2(d2 + k − 1)}   (6)
  Xr0 = {Xr0(1), Xr0(2), Xr0(3), ..., Xr0(k)}
      = {img1(d1 − k + 1), ..., img1(d1 − 2), img1(d1 − 1), img1(d1)}

where k is an odd number. For the 2 images, take the current pixels img1(p) and img2(q) of the column means as centers, where p∈[v1−L, v1 + L] and q∈[v2−L, v2 + L]. The k consecutive data centered on img1(p) and img2(q) form the comparison sequences as follows:

  Xl1 = {Xl1(1), Xl1(2), Xl1(3), ..., Xl1(k)}
      = {img1(p − (k − 1)/2), ..., img1(p − 2), img1(p − 1), img1(p), img1(p + 1), img1(p + 2), ..., img1(p + (k − 1)/2)}   (7)
  Xr1 = {Xr1(1), Xr1(2), Xr1(3), ..., Xr1(k)}
      = {img2(q − (k − 1)/2), ..., img2(q − 2), img2(q − 1), img2(q), img2(q + 1), img2(q + 2), ..., img2(q + (k − 1)/2)}

Step 5: Average processing is used to make each group of sequence data dimensionless.

  Xl0′ = Xl0/mean(Xl0)
  Xr0′ = Xr0/mean(Xr0)   (8)
  Xl1′ = Xl1/mean(Xl1)
  Xr1′ = Xr1/mean(Xr1)

where "mean" denotes the average value of the sequence data.

Step 6: Calculate the difference sequences ∆l, ∆r, the maximum differences wl1, wr1, and the minimum differences wl2, wr2.

  ∆l = |Xl0′ − Xl1′|
  ∆r = |Xr0′ − Xr1′|   (9)

  wl1 = max(∆l)
  wr1 = max(∆r)   (10)

  wl2 = min(∆l)
  wr2 = min(∆r)   (11)

where "max" and "min" represent the maximum and minimum values in the sequence data, respectively.

Step 7: Calculation of the gray correlation coefficients δl(i) and δr(i).

  δl(i) = (wl2 + ρwl1) / (∆l(i) + ρwl1)
  δr(i) = (wr2 + ρwr1) / (∆r(i) + ρwr1)   (12)

where i = 1, 2, ..., k, and ρ = 0.5.

Step 8: Calculation of the gray correlation degrees εl and εr.

  εl = (1/k) Σ(i=1 to k) δl(i)   (13)
  εr = (1/k) Σ(i=1 to k) δr(i)

Step 9: p = p + 1. If p satisfies the range p∈[v1−L,v1 + L], continue to calculate the gray correlation degree εl with img1(p) as the current pixel according to Step 4. q = q + 1. If q satisfies the range q∈[v2−L,v2 + L], continue to calculate the gray correlation degree εr with img2(q) as the current pixel according to Step 4.

Step 10: Calculate the p value corresponding to the maximum value of εl and record it as p0. Calculate the q value corresponding to the maximum value of εr and record it as q0. Then, the range of the overlapping region of the left image is determined as u∈[1,m], v∈[p0,d1], the range of the overlapping region of the right image is determined as u∈[1,m], v∈[d2,q0], and the overlapping region registration is completed.
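Steps 4–8 reduce, for each candidate position, to computing the gray correlation degree between a reference sequence and a comparison sequence of column means. A condensed pure-Python sketch (the guard for identical sequences is our addition to avoid division by zero; ρ = 0.5 as in Step 7; the sample sequences are invented):

```python
def gray_correlation(ref, cmp_, rho=0.5):
    """Gray relational degree of two equal-length sequences: Step 5
    normalization, Step 6 difference sequence and its extrema,
    Step 7 relational coefficients (Equation (12)), Step 8 their mean."""
    ref_n = [x / (sum(ref) / len(ref)) for x in ref]   # Step 5: dimensionless
    cmp_n = [x / (sum(cmp_) / len(cmp_)) for x in cmp_]
    diff = [abs(a - b) for a, b in zip(ref_n, cmp_n)]  # Equation (9)
    w_max, w_min = max(diff), min(diff)                # Equations (10), (11)
    if w_max == 0.0:                                   # identical after scaling
        return 1.0
    coeff = [(w_min + rho * w_max) / (d + rho * w_max) for d in diff]
    return sum(coeff) / len(coeff)                     # Equation (13)

# identical sequences yield the maximum correlation degree of 1.0
assert gray_correlation([40, 42, 44], [40, 42, 44]) == 1.0
print(gray_correlation([40, 42, 44], [44, 42, 40]))    # lower, in (0, 1)
```

Sliding the comparison window over p∈[v1−L, v1 + L] (and q over [v2−L, v2 + L]) and taking the argmax of this degree gives p0 and q0 as in Steps 9 and 10.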
As shown in Figure 5, regions R12 and R22 are the overlapping regions after registration, regions R11 and R21 are the non-overlapping regions, and regions R13 and R23 are the dark regions.

Figure 5. Determining the extent of the overlapping region.

Select 2 adjacent images in the multi-view conveyor belt images. The image size is 2048 × 2048, as shown in Figure 6a,b. When L = 800, k = 353, and Td = 44 (Td as half of the mean of image pixels), carry out the method in this paper and obtain the curves of the gray correlation degrees εl and εr, as shown in Figure 6c,d. It can be seen that the value range of the correlation degree is [0,1]; the maximum value of the correlation degree is used to register the overlapping region. The left and right image registration overlapping regions are [1269,1886] and [245,932], respectively. The registration of the overlapping region is shown in Figure 6e,f.

3.2. The Division of the IOI

It is difficult to detect, select, and match feature points in image mosaic for multi-view conveyor belt surface fault online detection, which makes it impossible to use the feature-based method for more accurate image registration. Generally, the useful feature points of the conveyor belt are mainly distributed in regions such as stripes caused by moving friction, conveyor belt joints, defects, etc. This paper uses an IOI detection algorithm to divide the IOI and the non-IOI image to improve feature-based registration effectiveness.

Figure 6. Registration of overlapping regions based on the grayscale. (a) Left image. (b) Right image. (c) Gray correlation curve of the left image. (d) Gray correlation curve of the right image. (e) Registration results of the overlapping region of the left image. (f) Registration results of the overlapping region of the right image.

For the i-th image of the m × n size images collected from N viewpoints, let the gray value of an image pixel be fi(u,v), and the overlapping region range be u∈[1,m], v∈[va,vb]. The detection algorithm of the IOI is as follows: if the value of any i∈[1,N] satisfies Ei ≥ TE, where TE is the threshold, then the N images to be stitched are considered to be the IOIs

with apparent features; otherwise, they are images with no apparent features. The image evaluation parameter Ei is calculated as follows:

Ei = k1 √((1/m) ∑u=1..m (HMi(u) − Fi)²) + k2 (1/(vb − va)) ∑v=va..vb VDi(v) (14)

where k1 and k2 are proportional coefficients, which represent the contribution degree of factors and satisfy k1 + k2 = 1. Generally, k1 and k2 can be set to 0.5. HMi(u) is the mean vector of row pixels, and Fi is the mean of image pixels, calculated as follows:

HMi(u) = (1/(vb − va)) ∑v=va..vb fi(u,v), u = 1, 2, . . . , m (15)

Fi = (1/m) ∑u=1..m HMi(u) (16)

VDi(v) is calculated as follows:

VDi(v) = √((1/m) ∑u=1..m (fi(u,v) − VMi(v))²), v = va, va + 1, . . . , vb (17)

where VMi (v) is the mean vector of column pixels, which is calculated as follows:

VMi(v) = (1/m) ∑u=1..m fi(u,v), v = 1, 2, . . . , n (18)
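Equations (14)–(18) can be computed directly on a NumPy array. The sketch below assumes a grayscale image with 0-based, inclusive column bounds va..vb for the overlapping region, and normalizes the column sum by the number of overlap columns (the paper's 1/(vb − va), up to the off-by-one of the index convention); all names are ours:

```python
import numpy as np

def ioi_score(img, va, vb, k1=0.5, k2=0.5):
    """Image evaluation parameter E_i of Eq. (14)."""
    ov = img[:, va:vb + 1].astype(float)          # overlapping region
    HM = ov.mean(axis=1)                          # Eq. (15): row means
    F = HM.mean()                                 # Eq. (16): image mean
    VM = ov.mean(axis=0)                          # Eq. (18): column means
    VD = np.sqrt(((ov - VM) ** 2).mean(axis=0))   # Eq. (17): column dispersion
    # Eq. (14): row-mean dispersion plus average column dispersion
    return k1 * np.sqrt(((HM - F) ** 2).mean()) + k2 * VD.mean()
```

An image is declared an IOI when this score reaches the threshold TE for some viewpoint i; the choice of TE is application-dependent.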

3.3. Feature-Based Adaptive Partition and Block Registration

For the IOI, the feature-based adaptive partition and block registration method is further used to improve stitching accuracy. The IOI is segmented adaptively to improve the efficiency and real-time performance of stitching. The specific operation steps are as follows:

Step 1: A binarization curve is generated for the image partition of interest.

Select the HMi(u) curve and the VDi(v) curve described in Equations (15) and (17). Construct a one-dimensional filter window of size Lf, where Lf is an odd number. Slide the filter window on the HMi(u) curve and the VDi(v) curve, respectively. The mean of Lf pixels is taken as the value of the center pixel of the window. The HM′i(u) curve and VD′i(v) curve are obtained after the mean filtering. Then, perform binarization processing according to the following equations to generate a binarization curve.

BHMi(u) = 1 if HM′i(u) < Fi, 0 if HM′i(u) ≥ Fi (19)

BVDi(v) = 1 if VD′i(v) > Vi, 0 if VD′i(v) ≤ Vi (20)

where Fi is the mean of HMi(u), as shown in Equation (16), and Vi is the mean of VDi(v), which satisfies:

Vi = (1/(vb − va)) ∑v=va..vb VDi(v) (21)

Step 2: Deal with the region of interest (ROI) of the binarization curve.

For the BVDi(v) and BHMi(u) binarization curves, the region with a continuous value of "1" is found as the ROI of the curve, and the start position, end position, and length of the region are marked. The merging threshold TC of adjacent ROIs and the removal threshold TB of isolated small ROIs are set in order to eliminate the influence of scattered and small

regions with continuous "1" on the partition effect of ROIs. Suppose the interval between 2 adjacent ROIs of the BVDi(v) and BHMi(u) binarization curves satisfies the condition of less than TC. In that case, the 2 ROIs are merged and connected. If the length of a region is less than TB, the ROI is removed. After the processing, the binarization curves are B′VDi(v) and B′HMi(u).

Step 3: The ROI of the IOI is generated and selected.

A template image with the same size as the overlapping region of the IOI is constructed, and its pixel value is gi(u,v).

gi(u,v) = B′HMi(u) × B′VDi(v) = 1 if (u,v) ∈ ROI, 0 if (u,v) ∈ background region (22)

where u∈[1,m], v∈[va,vb]; through Equation (23), the overlapping region of the IOI to be stitched is divided into 1 background region and Ki rectangular ROIs with effective feature point distribution.

f′i(u,v) = fi(u,v) × gi(u,v) (23)

If Ki ≥ 3, 2 ROIs with larger size are selected; otherwise, all ROIs are selected.

Step 4: The image block is generated and selected.

The ROI of the image is divided into Ni × Ni blocks, and the number of blocks is Bi. If Bi ≥ 3, 2 image blocks with larger standard deviation are selected; otherwise, all image blocks are selected.

The flow chart of adaptive partition and block operation for the IOI is shown in Figure 7.
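Steps 1 and 2 above (the binarization of Eqs. (19)–(20) and the merge/prune rule with thresholds TC and TB) can be sketched as follows; the helper names and the run-length bookkeeping are ours, not from the paper:

```python
import numpy as np

def binarize(hm_f, vd_f, F, V):
    """Eqs. (19)-(20): threshold the mean-filtered curves HM'_i(u) and
    VD'_i(v) against the means F_i (Eq. (16)) and V_i (Eq. (21))."""
    return (hm_f < F).astype(int), (vd_f > V).astype(int)

def merge_and_prune(curve, Tc, Tb):
    """Step 2: on a 0/1 curve, merge runs of 1s separated by a gap
    shorter than Tc, then remove isolated runs shorter than Tb."""
    padded = np.concatenate(([0], np.asarray(curve, int), [0]))
    d = np.diff(padded)
    runs = list(zip(np.where(d == 1)[0], np.where(d == -1)[0]))  # [start, end)
    merged = []
    for s, e in runs:
        if merged and s - merged[-1][1] < Tc:
            merged[-1] = (merged[-1][0], e)      # close a small gap
        else:
            merged.append((s, e))
    out = np.zeros(len(curve), int)
    for s, e in merged:
        if e - s >= Tb:                          # keep only long-enough ROIs
            out[s:e] = 1
    return out
```

The outer product of the two cleaned curves then gives the ROI template gi(u,v) of Eq. (22).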

Figure 7. Flow chart of adaptive partition and block operation for the image of interest (IOI).
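Steps 3 and 4 amount to tiling the ROI and ranking the tiles by standard deviation. A minimal sketch (our names; 1-based (ri, ci) positions as in the paper; partial edge tiles are ignored for brevity):

```python
import numpy as np

def select_blocks(roi, N, top=2):
    """Step 4: split an ROI into N-by-N blocks and keep the `top` blocks
    with the largest standard deviation; keep all blocks when fewer
    than three exist."""
    h, w = roi.shape
    blocks = [((r + 1, c + 1), roi[r*N:(r+1)*N, c*N:(c+1)*N].std())
              for r in range(h // N) for c in range(w // N)]
    if len(blocks) < 3:
        return blocks
    return sorted(blocks, key=lambda t: t[1], reverse=True)[:top]
```

Each returned entry pairs the block position (ri, ci) with its standard deviation, matching the bookkeeping reported later in Table 1.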

We verify the effect of the partition operation for the IOI. An IOI with a size of 2048 × 2048 (assuming that the whole image is an overlapping region), as shown in Figure 8a, is partitioned. The curves of HM1(u) and its mean, and VD1(v) and its mean, are shown in Figure 8c,d. Construct a one-dimensional window template of size 61 (Lf = 61) for mean filtering. The curves of HM′1(u) and its mean are shown in Figure 8e. The curves of VD′1(v) and its mean are shown in Figure 8f. The binarization curves BHM1(u) and BVD1(v) are obtained as shown in Figure 8g,h, respectively. Suppose TC = 128 and TB = 128; the binarization curves B′HM1(u) and B′VD1(v) obtained after the merging and removing operations are shown in Figure 8i,j, respectively. Finally, the ROI is partitioned, and the result is shown in Figure 8b. It can be seen from Figure 8b that most of the defect information of the conveyor belt is contained in the ROI, which reduces the scope of image registration.

We further verify the effect of adaptive partition and block operation for the IOI. The original images are shown in Figure 9a. Lf = 61, TC = 128, and TB = 128 are used to process the original images and extract the ROIs, as shown in Figure 9b. The ROI image is selected and divided into 128 × 128 blocks. The selected ROI images are shown in Figure 9(c1,c4,d1,e1). The result of 128 × 128 blocks for Figure 9(d1) is shown in Figure 10. The position of an image block is represented by (ri, ci); ri and ci are the numbers of rows and columns of the image block arranged according to the matrix, respectively, satisfying ri ≤ Ri, ci ≤ Ci, Bi = Ri × Ci. The number of blocks shown in Figure 10 is 5 × 4.

Then, select the image blocks with a larger standard deviation, as shown in Figure 9c–e, where Figure 9(d2,d3) are the image blocks with the largest standard deviation (Block 1) and the second-largest standard deviation (Block 2), and the positions in Figure 10 are (5,3) and (4,3), respectively. It can be seen that the separated image blocks contain a high amount of feature information, such as conveyor belt joints or surface faults, which is conducive to the extraction of useful features. The results of the ROI adaptive partition and block method are shown in Table 1.

Table 1. Results of the ROI adaptive partition and blocking method.

Image | ROI Number | ROI Size | ROI Blocks Number | ROI Standard Deviation | Block 1 Position | Block 1 Standard Deviation | Block 2 Position | Block 2 Standard Deviation
Figure 9(a1) | 2 | 896 × 1152 | 9 × 7 | 38.289 | (7,7) | 61.055 | (7,6) | 51.947
             |   | 256 × 1152 | 9 × 2 | 28.983 | (1,5) | 35.055 | (1,4) | 32.219
Figure 9(a2) | 1 | 640 × 512 | 5 × 4 | 20.741 | (5,3) | 31.456 | (4,3) | 30.289
Figure 9(a3) | 1 | 512 × 512 | 4 × 4 | 32.263 | (2,2) | 39.426 | (1,1) | 32.247

Figure 8. Partition operation for the IOI. (a) Image of interest (IOI). (b) Partition results. (c) HM1(u) curve. (d) VD1(v) curve. (e) HM′1(u) curve. (f) VD′1(v) curve. (g) BHM1(u) curve. (h) BVD1(v) curve. (i) B′HM1(u) curve. (j) B′VD1(v) curve.

Figure 9. Adaptive partition and block operation. (a) Original image. (a1) Conveyor belt joint image (Image 1). (a2) Longitudinal tear image (Image 2). (a3) Hole breakage image (Image 3). (b) Region of interest (ROI) partition results. (b1) ROI of Image 1. (b2) ROI of Image 2. (b3) ROI of Image 3. (c) Adaptive partition and block operation for Image 1. (c1) Extracted ROI image of Image 1. (c2) Block 1 of Figure 9(c1). (c3) Block 2 of Figure 9(c1). (c4) Extracted ROI image of Image 1. (c5) Block 1 of Figure 9(c4). (c6) Block 2 of Figure 9(c4). (d) Adaptive partition and block operation for Image 2. (d1) Extracted ROI image of Image 2. (d2) Block 1 of Figure 9(d1). (d3) Block 2 of Figure 9(d1). (e) Adaptive partition and block operation for Image 3. (e1) Extracted ROI image of Image 3. (e2) Block 1 of Figure 9(e1). (e3) Block 2 of Figure 9(e1).

20 20 20 20 40 40 40 40 60 60 60 60 80 80 80 Height 80

Height

Height

Height 100 100 100 100 120 120 120 120 20 40 60 80 100120 20 40 60 80 100120 20 40 60 80 100120 20 40 60 80 100120 Width Width Width Width (a) (b) (c) (d)

20 20 20 20 40 40 40 40 60 60 60 60 80 80 80 80

Height

Height

Height Height 100 100 100 100 120 120 120 120 20 40 60 80 100120 20 40 60 80 100120 20 40 60 80 100120 20 40 60 80 100120 Width Width Width Width (e) (f) (g) (h)

20 20 20 20 40 40 40 40 60 60 60 60 80 80 80 80

Height

Height

Height

Height 100 100 100 100 120 120 120 120 20 40 60 80 100120 20 40 60 80 100120 20 40 60 80 100120 20 40 60 80 100120 Width Width Width Width (i) (j) (k) (l)

20 20 20 20 40 40 40 40 60 60 60 60 80 80 80 80

Height

Height

Height

Height 100 100 100 100 120 120 120 120 20 40 60 80 100120 20 40 60 80 100120 Block 2 20 40 60 80 100120 20 40 60 80 100120 Width Width of Fig.9(d1) Width Width (m) (n) (o) (p)

20 20 20 20 40 40 40 40 60 60 60 60 80 80 80

Height 80 Height

Height

Height 100 100 100 100 120 120 120 120 20 40 60 80 100120 20 40 60 80 100120 Block 1 20 40 60 80 100120 20 40 60 80 100120 Width Width of Fig.9(d1) Width Width (q) (r) (s) (t)

FigureFigure 10. 10.The The result result of of 128 128× ×128 128 block block for for thethe ROIROI imageimage extractedextracted inin Figure9 9(d1).(d1). ((a)) (1,1). (1,1). ( (bb)) (1,2). (1,2). (c (c) )(1,3). (1,3). (d (d) )(1,4). (1,4). (e) (2,1). (f) (2,2). (g) (2,3). (h) (2,4). (i) (3,1). (j) (3,2). (k) (3,3). (l) (3,4). (m) (4,1). (n) (4,2). (o) (4,3). (p) (4,4). (q) (5,1). (r) (5,2). (s) (e) (2,1). (f) (2,2). (g) (2,3). (h) (2,4). (i) (3,1). (j) (3,2). (k) (3,3). (l) (3,4). (m) (4,1). (n) (4,2). (o) (4,3). (p) (4,4). (q) (5,1). (r) (5,2). (5,3). (t) (5,4). (s) (5,3). (t) (5,4).

To speed up the calculation, we use the SURF algorithm for feature extraction and description of the selected image block. The SURF algorithm includes the following operations: building the Hessian matrix for feature point extraction, building the scale space, feature point location, feature point main direction assignment, feature point descriptor generation, and feature point matching. The random sample consensus (RANSAC) algorithm is used to eliminate the outer points and retain the inner points for image registration. The transformation matrix of the image is constructed through the matching point pairs. According to the transformation matrix obtained by feature registration, the corresponding image is transformed. Feature point matching uses a distance function (Euclidean distance) to retrieve the similarity between high-dimensional vectors. In this paper, the K-D tree

algorithm is used. Firstly, the K-D tree is established for each image with the feature descriptor as the tree node information. Then, the K data points closest to the corresponding feature points in the other image are searched using the feature points of 1 of the 2 images to be stitched to complete the initial matching of feature points. The flow chart of the feature-based adaptive partition and block registration method is shown in Figure 11.
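The consensus step can be illustrated with a NumPy-only toy. The paper pairs SURF descriptors via a K-D tree and then fits a homography with RANSAC (in practice through a library such as OpenCV); the sketch below keeps only the sample-consensus idea, estimating a pure 2-D translation from putative matches. All names and parameters are ours:

```python
import numpy as np

def ransac_translation(src, dst, n_iter=200, tol=2.0, seed=0):
    """Estimate a 2-D translation from putative matches src[i] -> dst[i],
    rejecting outliers by sample consensus (a toy stand-in for the
    homography-fitting RANSAC used in the paper)."""
    rng = np.random.default_rng(seed)
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    best_t, best_inliers = np.zeros(2), np.zeros(len(src), bool)
    for _ in range(n_iter):
        i = rng.integers(len(src))                    # minimal sample: 1 pair
        t = dst[i] - src[i]
        inliers = np.linalg.norm(dst - (src + t), axis=1) < tol
        if inliers.sum() > best_inliers.sum():        # keep the best consensus
            best_t, best_inliers = t, inliers
    if best_inliers.any():                            # refit on all inliers
        best_t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return best_t, best_inliers
```

In the full method, the inlier pairs instead feed a least-squares homography estimate, and the right image is warped with the resulting matrix before fusion.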


Figure 11. Flow chart of the feature-based adaptive partition and block registration method.

3.4. Improved Weighted Smooth Fusion

The improved weighted smoothing algorithm is used for image fusion to obtain the stitched image. The expression of the improved weighted smoothing algorithm is as follows:

f(u,v) = f1(u,v) if (u,v) ∈ R11; ω1(u,v)f1(u,v) + ω2(u,v)f2(u,v) if (u,v) ∈ R12 ∪ R22; f2(u,v) if (u,v) ∈ R21 (24)

where R11 and R21 are the non-overlapping regions of the 2 adjacent images; R12 ∪ R22 is the overlapping region; and ω1 and ω2 are the weights of the corresponding pixels in the overlapping region, satisfying 0 < ω1, ω2 < 1 and ω1 + ω2 = 1. To ensure the continuity of the overlapping region transition and effectively remove stitching marks, one calculates the weights according to the following equations:

ω1(u,v) = (v − p0)/(d1 − p0), (u,v) ∈ R12 (25)

ω2(u,v) = (d1 − v)/(d1 − p0), (u,v) ∈ R22 (26)
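Assuming both images are already registered into a common frame and the overlap spans columns p0..d1, the fusion of Eqs. (24)–(26) is a linear cross-fade along the columns. A minimal sketch (our names; the ramp direction follows the printed formulas):

```python
import numpy as np

def blend_overlap(f1, f2, p0, d1):
    """Fuse the overlapping columns v in [p0, d1] of two registered
    images with the weights of Eqs. (25)-(26); omega1 + omega2 = 1
    holds at every pixel."""
    v = np.arange(p0, d1 + 1, dtype=float)
    w1 = (v - p0) / (d1 - p0)          # Eq. (25)
    w2 = (d1 - v) / (d1 - p0)          # Eq. (26)
    # Eq. (24), overlapping branch: per-column weighted sum
    return w1 * f1[:, p0:d1 + 1] + w2 * f2[:, p0:d1 + 1]
```

Outside the overlap, Eq. (24) simply copies f1 over R11 and f2 over R21, so the full mosaic is obtained by concatenating the non-overlapping parts with this blended strip.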

4. Experiments and Results

The experiment is carried out on the platform shown in Figure 12 in order to verify the effectiveness of the method proposed in this paper. Two 2048 × 2048 IOIs with the same surface damage content were selected, as shown in Figure 13(a1,b1). Simulation using MATLAB on an Intel i5 CPU computer showed that the method was effective. The ROI images are shown in Figure 13(a2,b2). The block images are shown in Figure 13(a3,a4,b3,b4). For the original image, the ROI image, and the block image, the SURF algorithm with a Hessian matrix determinant response threshold of 1000

was used for feature extraction and description. Draw a circle on the image with the feature point as the center. The radius of the circle indicates the size of the feature point, and the straight line indicates the direction of the feature point. The feature points are described as shown in Figure 13(a5–a8,b5–b8).

The comparison of image feature extraction is shown in Table 2. It can be seen that the numbers of feature points extracted in the ROIs of Figure 13(a1,b1) were 65.52% and 69.71% of those of the whole images, respectively. The number of feature points extracted from the ROI accounts for a high proportion of the number of feature points in the whole image, reflecting that the ROI is the feature point concentration region of the whole image. However, it took 40.48% and 37.18% of the total image time, respectively, which was less time-consuming. Although the number of feature points extracted from the two image blocks was less than that of the ROI, the extracted feature points had larger feature values and significant features, which met easy registration requirements. Moreover, the two image blocks took less time. The total time was only 16 ms, which was 2.77% and 2.93% of the whole image time, respectively. It is thus suitable for fast registration.
The of total the feature time was point. only The 16 feature ms, which points was are 2.77% described and 2.93% as shown of the inwhole Figure image13(a5–a8,b5–b8). time, respectively. It is thus suitable for fast registration.

Light Source

Computer Switch Line-scan Camera

Figure 12. Experimental platform.Figure 12. Experimental platform. The comparison of image feature extraction is shown in Table2. It can be seen that the number of feature points extracted in the ROI of Figure 13(a1,b1) were 65.52% and 69.71% of the whole image, respectively. The number of feature points extracted from the ROI accounts for a high proportion of the number of feature points in the whole image, reflecting that the ROI is the feature point concentration region of the whole image. However, it took 40.48% and 37.18% of the total image time, respectively, which was less time-consuming. Although the number of feature points extracted from two image blocks was less than the ROI, the feature points extracted had larger feature values and significant (a1) features, which met(a2 easy) registration requirements.(a3) Moreover, the( twoa4) image blocks took less time. The total time was only 16 ms, which was 2.77% and 2.93% of the whole image time, respectively. It is thus suitable for fast registration. For the two images of Figure 13(a1,b1), the different threshold values of Hessian matrix determinant response value, such as 500, 1000, 1500, and 2000, were used to extract the feature points of each image. The comparison in Table3 shows that with the increase of the threshold value, the number of feature points extracted and the time consumed was reduced. However, the threshold value is important for the block image, because of the

small image size. If the threshold value is too large, too few feature points are extracted in the block, which is not conducive to feature point registration. For example, when the threshold value was 2000, the number of feature points shown in Figure 13(b7) was only one. The threshold value of the Hessian matrix determinant response can therefore be selected as 1000 by comparison.

The SURF algorithm was used for feature extraction and description. Each feature point is drawn on the image as a circle centered at the point: the radius of the circle indicates the scale of the feature point, and the straight line indicates its direction. The feature points are described as shown in Figure 13(a5–a8,b5–b8). The comparison of image feature extraction is shown in Table 2. It can be seen that the numbers of feature points extracted in the ROIs of Figure 13(a1,b1) were 65.52% and 69.71% of those in the whole images, respectively. The feature points extracted from the ROI account for a high proportion of the feature points in the whole image, reflecting that the ROI is the region where the feature points of the whole image are concentrated. However, extracting them took only 40.48% and 37.18% of the whole-image time, respectively, which is much less time-consuming. Although fewer feature points were extracted from the two image blocks than from the ROI, the extracted points had larger feature values and significant features, which made them easy to register. Moreover, the two image blocks took even less time: the total time was only 16 ms, which was 2.77% and 2.93% of the whole-image time, respectively. It is thus suitable for fast registration.
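The ROI-then-blocks extraction strategy above depends on splitting the overlapping region into fixed-size tiles before running SURF on each. A minimal sketch of that partitioning, assuming a NumPy image array and the 128 × 128 block size used later in the experiments (the function name and remainder handling are illustrative, not the authors' code):

```python
import numpy as np

def partition_blocks(region, block=128):
    """Split an overlapping-region image into block x block tiles.
    Remainder strips at the right/bottom borders are kept as smaller tiles."""
    h, w = region.shape[:2]
    return [region[y:y + block, x:x + block]
            for y in range(0, h, block)
            for x in range(0, w, block)]
```

Feature extraction can then be run per tile, which keeps each SURF invocation cheap and lets tiles be skipped adaptively.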


Figure 12. Experimental platform.


Figure 13. Speeded up robust features (SURF) extraction and description. (a) Image 1. (a1) Original image of Image 1. (a2) ROI of Image 1. (a3) Block 1 of Image 1. (a4) Block 2 of Image 1. (a5) Feature points of Figure 13(a1). (a6) Feature points of Figure 13(a2). (a7) Feature points of Figure 13(a3). (a8) Feature points of Figure 13(a4). (b) Image 2. (b1) Original image of Image 2. (b2) ROI of Image 2. (b3) Block 1 of Image 2. (b4) Block 2 of Image 2. (b5) Feature points of Figure 13(b1). (b6) Feature points of Figure 13(b2). (b7) Feature points of Figure 13(b3). (b8) Feature points of Figure 13(b4).


For the two images of Figure 13(a1,b1), different threshold values of the Hessian matrix determinant response value, such as 500, 1000, 1500, and 2000, were used to extract the feature points of each image. The comparison in Table 3 shows that as the threshold value increases, the number of feature points extracted and the time consumed are both reduced. However, the threshold value is important for the block image because of the small image size. If the threshold value is too large, too few feature points are extracted in the block, which is not conducive to feature point registration. For example, when the threshold value was 2000, the number of feature points shown in Figure 13(b7) was only one. By comparison, 1000 can be selected as the threshold value of the Hessian matrix determinant response value.
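The effect of the threshold can be illustrated with a simplified, NumPy-only stand-in for SURF's response map. This is only a sketch: real SURF computes the Hessian determinant with box filters over an integral image at multiple scales, and the 0.9 weight on Lxy is the approximation from Bay et al.; here plain finite differences are used instead.

```python
import numpy as np

def hessian_det_response(img):
    """Approximate Hessian determinant response map via finite differences."""
    Lxx = np.gradient(np.gradient(img, axis=1), axis=1)
    Lyy = np.gradient(np.gradient(img, axis=0), axis=0)
    Lxy = np.gradient(np.gradient(img, axis=0), axis=1)
    # SURF-style response: det(H) ~ Lxx*Lyy - (0.9*Lxy)^2
    return Lxx * Lyy - (0.9 * Lxy) ** 2

def candidate_points(img, threshold):
    """Pixels whose response exceeds the threshold; raising the threshold
    can only shrink this set, which is the trend seen in Table 3."""
    resp = hessian_det_response(img.astype(float))
    ys, xs = np.nonzero(resp > threshold)
    return list(zip(ys, xs))
```

Because thresholding is applied to a fixed response map, a larger threshold always yields a subset of the candidates of a smaller one, matching the monotonic decrease in Table 3.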


Table 2. Comparison of image feature extraction when the threshold value of Hessian matrix determinant response value is 1000.

Parameters        Fig. 13(a5)  Fig. 13(a6)  Fig. 13(a7)  Fig. 13(a8)  Fig. 13(b5)  Fig. 13(b6)  Fig. 13(b7)  Fig. 13(b8)
Quantity (piece)  232          152          16           15           175          122          7            22
Time (ms)         578          234          8            8            546          203          8            8

Table 3. Comparison of image feature extraction by different threshold values of Hessian matrix determinant response value.

Threshold  Parameters        Fig. 13(a5)  Fig. 13(a6)  Fig. 13(a7)  Fig. 13(a8)  Fig. 13(b5)  Fig. 13(b6)  Fig. 13(b7)  Fig. 13(b8)
500        Quantity (piece)  694          373          27           35           647          334          29           39
           Time (ms)         811          265          15           15           733          281          15           15
1000       Quantity (piece)  232          152          16           15           175          122          7            22
           Time (ms)         578          234          8            8            546          203          8            8
1500       Quantity (piece)  114          74           8            11           88           61           5            14
           Time (ms)         483          125          8            8            437          125          8            8
2000       Quantity (piece)  64           44           3            10           48           28           1            9
           Time (ms)         468          125          8            8            406          125          8            8


For the feature points extracted from the original image, ROI image, and block image, we used the RANSAC algorithm to complete the matching of feature points. The matching results are shown in Figure 14, and the comparison of the registration is shown in Table 4. It can be seen that a sufficient number of registered feature point pairs can be obtained from the block images. The registration processing time on an Intel i5 CPU computer was 5 ms, which was 41.67% of the whole-image time.
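The RANSAC step above repeatedly samples putative matches, hypothesizes a transform, and keeps the hypothesis with the most inliers. A minimal pure-Python sketch for the simplest case, a translation between matched point pairs (the paper estimates a fuller transform; the function name, tolerance, and iteration count here are illustrative assumptions):

```python
import random

def ransac_translation(matches, n_iter=200, tol=2.0, seed=0):
    """matches: list of ((x1, y1), (x2, y2)) putative feature pairs.
    Returns the translation (dx, dy) with the most inliers, and that count."""
    rng = random.Random(seed)
    best, best_inliers = (0.0, 0.0), -1
    for _ in range(n_iter):
        # A translation needs only one pair to form a hypothesis
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1
        # Count pairs consistent with this hypothesis within the tolerance
        inliers = sum(
            abs((bx - ax) - dx) < tol and abs((by - ay) - dy) < tol
            for (ax, ay), (bx, by) in matches
        )
        if inliers > best_inliers:
            best, best_inliers = (dx, dy), inliers
    return best, best_inliers
```

Mismatched pairs vote for isolated hypotheses and are outvoted by the consistent majority, which is why block images with few but significant feature points still register reliably.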


Figure 14. Random sample consensus (RANSAC) matching of feature points. (a) Figure 13(a5) and Figure 13(b5) registration. (b) Figure 13(a6) and Figure 13(b6) registration. (c) Figure 13(a7) and Figure 13(b7) registration. (d) Figure 13(a8) and Figure 13(b8) registration.

Table 4. Comparison of the registration.

Image           Number of Feature Points (Piece)  Registration Number (Pairs)  Registration Time (ms)
Original Image  407                               14                           12
ROI Image       274                               10                           9
Block Image     60                                4                            5

The proposed method was applied in the mine for conveyor belt surface fault online detection, as shown in Figure 15. To further verify the effectiveness of the AMIM method, we selected two groups of images collected by two line-scan cameras with different viewpoints. The image size was 2048 × 2048, as shown in Figure 16a,b. When L = 800, k = 353, and Td = 44 (Td is taken as half of the mean of the image pixels), the overlapping regions of the left and right images in Figure 16a were [1519,1788] and [236,397], and the overlapping regions of the left and right images in Figure 16b were [1526,1784] and [257,546]. The parameters were set as TE = 20, Lf = 61, TC = 128, TB = 128. The evaluation parameters E of the IOI, calculated by Equation (14) from the registered overlapping regions of the images in Figure 16a, were 11.22 and 8.14, respectively. According to the IOI detection algorithm, the images in Figure 16a were identified as the non-IOI. For the overlapping regions estimated from the images in Figure 16b, the calculated evaluation parameters E were 29.06 and 36.97, respectively, so the images in Figure 16b were identified as the IOI. Only for Figure 16b was the feature-based adaptive partition and block registration method carried out, using a 128 × 128 block size. Finally, the improved weighted smoothing algorithm was used for image fusion, and the stitched images were obtained, as shown in Figure 16c,d. The subjective evaluation of the stitched images shows that the stitching effect of the AMIM method is good.
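Two of the decisions in this pipeline are simple enough to sketch directly: the grayscale threshold Td is half of the mean pixel value, and an image is treated as an IOI when its evaluation parameter E exceeds TE. Equation (14) for E itself is not reproduced here, so E is taken as given; the helper names are illustrative, not the authors' code.

```python
import numpy as np

def grayscale_threshold(img):
    """Td is taken as half of the mean of the image pixels."""
    return float(img.mean()) / 2.0

def is_ioi(E, TE=20.0):
    """An image whose evaluation parameter E (from Equation (14))
    exceeds the threshold TE is classified as an image of interest (IOI)."""
    return E > TE
```

With TE = 20, the Figure 16a values (11.22, 8.14) fall below the threshold and skip feature-based registration, while the Figure 16b values (29.06, 36.97) trigger it, exactly the branching described above.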


Figure 15. Application of the proposed research. (a) A mine site. (b) The installed device.


Figure 16. Stitching effect of AMIM method. (a) Two view image group 1. (a1) Left image. (a2) Right image. (b) Two view image group 2. (b1) Left image. (b2) Right image. (c) Two view image group 1 mosaic image. (d) Two view image group 2 mosaic image.
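The fusion step that produces the mosaics in Figure 16c,d can be sketched as a weighted blend over the registered overlap. This is a plain linear-ramp sketch for horizontally adjacent grayscale images; the paper's improved weighted smoothing algorithm refines the weighting, which is not reproduced here.

```python
import numpy as np

def blend_overlap(left, right, overlap):
    """Fuse two horizontally adjacent grayscale images whose last/first
    `overlap` columns coincide, using a linear weight ramp."""
    h, wl = left.shape
    w = np.linspace(1.0, 0.0, overlap)  # weight for the left image
    fused_mid = left[:, wl - overlap:] * w + right[:, :overlap] * (1.0 - w)
    return np.hstack([left[:, :wl - overlap], fused_mid, right[:, overlap:]])
```

The ramp makes each output pixel drift smoothly from the left image to the right one across the overlap, which suppresses the visible seam that a hard cut would leave.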

The AMIM method is evaluated from the registration rate and running time to verify the effectiveness of our method more objectively. The registration rate measures the accuracy of the matched feature points:

Z = 1 − c1/c2 (27)

where Z is the registration rate, c1 is the number of mismatched feature pairs, and c2 is the number of feature pairs in the RANSAC algorithm. The comparison between the AMIM method and other methods is shown in Table 5. For the images in Figure 16a, the SIFT and SURF methods cannot obtain correct registration feature points and therefore cannot perform accurate feature-based image stitching. The AMIM method, however, does not use feature-based registration after identifying the images as the non-IOI; it stitches the images directly using the registered overlapping region. For the images in Figure 16b, although the number of feature points extracted by our method was small, the registration rate was high, reaching 97.67%. The average time of stitching two adjacent images was less than 500 ms. Thus, our method ensures image stitching accuracy, reduces the registration time, and improves image stitching efficiency. Therefore, our method has advantages in the process of multi-view conveyor belt image stitching.
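Equation (27) is straightforward to compute; a minimal helper (parameter names are descriptive stand-ins for c1 and c2):

```python
def registration_rate(mismatched_pairs, total_pairs):
    """Z = 1 - c1/c2: the fraction of RANSAC feature pairs
    that were matched correctly."""
    return 1.0 - mismatched_pairs / total_pairs
```

For example, zero mismatches gives a rate of 1.0, and one mismatch out of forty pairs gives 0.975.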

Table 5. Comparison of our method with other methods.

Method  Image       Feature Points Number  Registration Number  Mismatches Number  Registration Rate  Time (ms)
SIFT    Figure 16a  1225                   0                    13                 0%                 5513
SIFT    Figure 16b  984                    96                   5                  95.04%             4892
SURF    Figure 16a  232                    0                    11                 0%                 1270
SURF    Figure 16b  175                    94                   5                  94.95%             1231
AMIM    Figure 16a  31                     -                    -                  -                  278
AMIM    Figure 16b  42                     42                   1                  97.67%             328

5. Conclusions
Correct operation of the conveyor belt is very important for production and staff safety. The multi-view conveyor belt surface fault online detection system based on machine vision can provide a sufficient guarantee. This paper proposed an adaptive multi-view image mosaic method based on the combination of grayscale and feature to improve the accuracy and real-time performance of image mosaic and realize multi-view conveyor belt surface fault online detection. Compared with the traditional mosaic methods, the proposed method has the following improvements and advantages:
The overlapping region of two adjacent images is preliminarily estimated by establishing the overlapping region estimation model, and then the grayscale-based method is used to register the overlapping region.
For the collected image, the IOI detection algorithm is used to divide it into the IOI and the non-IOI, and different registration methods are used. For the IOI, whose overlapping region may contain fault information of the conveyor belt, the feature-based partition and block registration method is used to register the images more accurately: the overlapping region is adaptively segmented, the SURF algorithm is used to extract the feature points, and the random sample consensus (RANSAC) algorithm is used to achieve accurate registration. This makes it easy to realize accurate measurement of the geometric shape of the fault, and it is conducive to the accurate and reliable detection and evaluation of the health status of the conveyor belt.
For the non-IOI, the overlapping region almost never contains fault information of the conveyor belt, and its features are not obvious. Feature-based registration methods such as the SIFT and SURF algorithms easily produce wrong registrations here, with a registration rate as low as 0%. They cannot obtain the right result, and the process is time-consuming, which is not conducive to real-time online detection.
Accurate registration is not performed on adjacent non-IOIs, which improves the accuracy and real-time performance of stitching. The improved weighted smoothing fusion algorithm is used to fuse the images to realize image stitching.
The experimental results show that when the SURF algorithm is used to extract and describe the block image features, a threshold value of 1000 for the Hessian matrix determinant response value performs better than 500, 1500, or 2000: it extracts enough feature points while consuming less time. Enough feature points can be extracted from the block image, and the features are significant, which makes accurate registration easy. Compared with whole-image registration, it takes less time. The results of the experimental analysis showed the usefulness of the proposed method. The stitching effect is better than that of the SIFT and SURF algorithms. Simultaneously, it reduces the amount of stitching calculation and the time of the stitching process, and the registration rate is high: it can reach 97.67%, and the average time of stitching two adjacent images is less than 500 ms. It is suitable for conveyor belt surface fault online detection. In addition, this method can also provide a reference for other types of image mosaic applications.
The proposed method was applied in the mine for conveyor belt surface fault online detection and can be extended to port, electric power, metallurgy, chemical industry, and other fields for conveyor belt surface fault online detection. In the future, we will study methods for accurate and reliable detection and evaluation of conveyor belt health. To improve the detection performance, we can further use the hybrid imaging method combining thermal imaging [25] and visible light imaging to explore ways to realize the fusion of heterogeneous images and multi-view image mosaic at the same time. It is also valuable to explore the application of deep learning in conveyor belt fault detection [26]. Multi-view imaging is widely used in machine vision-based detection and has played an important role; the method proposed in this paper can be used in industrial, agricultural, medical, and other fields of fault detection using the multi-view method [27–30]. Compared with other improved image mosaic methods, the proposed method is more suitable for solving the problem of conveyor belt surface fault online detection. For example, the IOI detection algorithm is proposed on the basis of the characteristics of the conveyor belt image, and it may not adapt to the images of other detection objects in different applications. However, for other applications, the proposed method has reference value for image mosaic problems that need to take both accuracy and real-time performance into account.

Author Contributions: Conceptualization, R.G.; methodology, R.G.; validation, R.G. and X.L.; formal analysis, R.G.; resources, C.M.; data curation, X.L.; writing—original draft preparation, R.G.; writing—review and editing, X.L.; supervision, C.M.; project administration, R.G. All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded by the National Natural Science Foundation of China (grant no. 51504164), the Key Scientific and Technological Support Projects of the Tianjin Key R&D Program (grant no. 18YFZCGX00930), the Tianjin Natural Science Foundation (grant no. 17JCZDJC31600), and the Tianjin Key Research and Development Program (grant no. 18YFJLCG00060).

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: Data from this study can be made available upon request to the corresponding author after executing an appropriate data sharing agreement.

Conflicts of Interest: The authors declare no conflict of interest.

References
1. Yao, Y.; Zhang, B. Influence of the Elastic Modulus of a Conveyor Belt on the Power Allocation of Multi-Drive Conveyors. PLoS ONE 2020, 15, e0235768. [CrossRef] [PubMed]
2. Andrejiova, M.; Grincova, A.; Marasova, D. Failure Analysis of the Rubber-Textile Conveyor Belts Using Classification Models. Eng. Fail. Anal. 2019, 101, 407–417. [CrossRef]
3. Hou, C.; Qiao, T.; Zhang, H.; Pang, Y.; Xiong, X. Multispectral visual detection method for conveyor belt longitudinal tear. Measurement 2019, 143, 246–257. [CrossRef]
4. Wang, G.; Zhang, L.; Sun, H.; Zhu, C. Longitudinal Tear Detection of Conveyor Belt under Uneven Light Based on Haar-AdaBoost and Cascade Algorithm. Measurement 2021, 168, 108341. [CrossRef]
5. Li, J.; Miao, C. The conveyor belt longitudinal tear online detection based on improved SSR algorithm. Optik 2016, 127, 8002–8010. [CrossRef]
6. Yang, Y.; Miao, C.; Li, X.; Mei, X. Online conveyor belts inspection based on machine vision. Optik 2014, 125, 5803–5807. [CrossRef]
7. Yu, B.; Qiao, T.; Zhang, H.; Yan, G. Dual band infrared detection method based on mid-infrared and long infrared vision for conveyor belts longitudinal tear. Measurement 2018, 120, 140–149. [CrossRef]
8. Karami, E.; Prasad, S.; Shehata, M. Image matching using SIFT, SURF, BRIEF and ORB: Performance comparison for distorted images. In Proceedings of the 2015 Newfoundland Electrical and Computer Engineering Conference, St. John's, NL, Canada, 5–6 November 2015.

9. Lowe, D.G. Distinctive image features from scale-invariant key points. Int. J. Comput. Vis. 2004, 60, 91–110. [CrossRef]
10. Li, Y.; Li, G.; Gu, S. Image mosaic algorithm based on regional block and scale invariant feature transformation. Opt. Precis. Eng. 2016, 24, 1197–1205.
11. Bay, H.; Tuytelaars, T.; Gool, L. SURF: Speeded up Robust Features. Comput. Vis. Image Underst. 2008, 110, 346–359. [CrossRef]
12. Jia, X.; Wang, X.; Dong, Z. Image Matching Method Based on Improved SURF Algorithm. In Proceedings of the 2015 IEEE International Conference on Computer and Communications (ICCC), Chengdu, China, 10 October 2015. [CrossRef]
13. Zhang, W.; Yang, X. Study on Low Illumination Simultaneous Polarization Image Registration Based on Improved SURF Algorithm. IOP Conf. Ser. Mater. Sci. Eng. 2017, 274, 012018. [CrossRef]
14. Zhang, T.; Zhao, R.; Chen, Z. Application of Migration Image Registration Algorithm Based on Improved SURF in Remote Sensing Image Mosaic. IEEE Access 2020, 99. [CrossRef]
15. Mikolajczyk, K.; Schmid, C. A performance evaluation of local descriptors. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1615–1630. [CrossRef] [PubMed]
16. Zhang, W.; Li, X.; Yu, J.; Kumar, M.; Mao, Y. Remote sensing image mosaic technology based on SURF algorithm in agriculture. Image Video Proc. 2018, 85. [CrossRef]
17. Yang, Z.; Shen, D.; Yap, P.T. Image mosaicking using SURF features of line segments. PLoS ONE 2017, 12, e0173627. [CrossRef] [PubMed]
18. Aktar, R.; Kharismawati, D.E.; Palaniappan, K.; Aliakbarpour, H.; Bunyak, F.; Stapleton, E.A.; Kazic, T. Robust mosaicking of maize fields from aerial imagery. Appl. Plant Sci. 2020, 8, e11387. [CrossRef] [PubMed]
19. Angel, Y.; Turner, D.; Parkes, S.; Malbeteau, Y.; Lucieer, A.; McCabe, M.F. Automated Georectification and Mosaicking of UAV-Based Hyperspectral Imagery from Push-Broom Sensors. Remote Sens. 2019, 12, 34. [CrossRef]
20. Zhang, J.; Yin, X.; Luan, J.; Liu, T. An improved vehicle panoramic image generation algorithm. Multimed. Tools Appl. 2019, 78, 27663–27682. [CrossRef]
21. Li, H.; Luo, J.; Huang, C.; Yang, Y.; Xie, S. An Adaptive Image-stitching Algorithm for an Underwater Monitoring System. Int. J. Adv. Robot. Syst. 2014, 11, 263–270. [CrossRef]
22. Pedro, P.R.F.; Francisco, D.L.M.; Francisco, G.L.X.; Samuel, L.G.; Jose, C.S.; Francisco, N.C.F.; Rodrigo, G.F. New Analysis Method Application in Metallographic Images through the Construction of Mosaics Via Speeded Up Robust Features and Scale Invariant Feature Transform. Materials 2015, 8, 3864–3882. [CrossRef]
23. Hui, B.; Wen, G.; Zhao, Z.; Li, D. Line-scan camera calibration in close-range photogrammetry. Opt. Eng. 2012, 51, 3602. [CrossRef]
24. Hui, B.; Wen, G.; Zhang, P.; Li, D. A Novel Line Scan Camera Calibration Technique with an Auxiliary Frame Camera. Trans. Instrum. Meas. 2013, 62, 2567–2575. [CrossRef]
25. Glowacz, A. Fault Diagnosis of Electric Impact Drills Using Thermal Imaging. Measurement 2021, 171, 108815. [CrossRef]
26. Zhang, M.; Shi, H.; Li, X.; Zhang, Y.; Yu, Y.; Li, X.; Zhou, M. Deep learning-based damage detection of mining conveyor belt. Measurement 2021, 175, 109130. [CrossRef]
27. Zulkifley, M.A.; Abdani, S.R.; Zulkifley, N.H. Automated Bone Age Assessment with Image Registration Using Hand X-ray Images. Appl. Sci. 2020, 10, 7233. [CrossRef]
28. Li, D.L.; Prasad, M.; Liu, C.-L.; Lin, C.-T. Multi-View Vehicle Detection Based on Fusion Part Model With Active Learning. IEEE Trans. Intell. Transp. Syst. 2020. [CrossRef]
29. Wang, W.; Xin, B.; Deng, N.; Li, J. Objective Evaluation on Yarn Hairiness Detection Based on Multi-View Imaging and Processing Method. Measurement 2019, 148, 106905. [CrossRef]
30. Peng, S.; Jiang, Y.; Zhang, K.; Wu, C.; Ai, D.; Yang, J.; Wang, Y.; Huang, Y. Cooperative Three-View Imaging Optical Coherence Tomography for Intraoperative Vascular Evaluation. Appl. Sci. 2018, 8, 1551. [CrossRef]