Article

Detection and Classification of Bearing Surface Defects Based on Machine Vision
Manhuai Lu 1,* and Chin-Ling Chen 2,3,4,*
1 College of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Zhongshan Institute, Zhongshan 528400, China
2 College of Computer and Information Engineering, Xiamen University of Technology, Xiamen 361024, China
3 School of Information Engineering, Changchun Sci-Tech University, Changchun 130600, China
4 Department of Computer Science and Information Engineering, Chaoyang University of Technology, Taichung 413310, Taiwan
* Correspondence: [email protected] (M.L.); [email protected] (C.-L.C.)
Abstract: Surface defects on bearings can directly affect the service life and reduce the performance of equipment. At present, the detection of bearing surface defects is mostly done manually, which is labor-intensive and results in poor stability. To improve the inspection speed and the defect recognition rate, we propose a bearing surface defect detection and classification method using machine vision technology. The method makes two main contributions. It proposes a local multi-neural network (Lc-MNN) image segmentation algorithm with the wavelet transform as the classification feature. The precision segmentation of the defect image is accomplished in three steps: wavelet feature extraction, Lc-MNN region division, and Lc-MNN classification. It also proposes a feature selection algorithm (SCV) that makes comprehensive use of scalar feature selection, correlation analysis, and vector feature selection: similar features are first removed through correlation analysis, the results are further screened with a scalar feature selection algorithm, and the classification features are finally selected using a feature vector selection algorithm. Using 600 test samples with three types of defect in the experiment, an identification rate of 99.5% was achieved without the need for large-scale calculation. The comparison tests indicated that the proposed method can achieve efficient feature selection and defect classification.

Keywords: computer monitoring and production control; bearing surface inspection; feature selection; defect classification; the use of artificial intelligence in industry

Citation: Lu, M.; Chen, C.-L. Detection and Classification of Bearing Surface Defects Based on Machine Vision. Appl. Sci. 2021, 11, 1825. https://doi.org/10.3390/app11041825

Academic Editor: Albert Smalcerz
Received: 28 January 2021; Accepted: 15 February 2021; Published: 18 February 2021

Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

1.1. Background

Bearings are important components of mechanical equipment. They convert the direct friction between parts in relative rotation into rolling or sliding friction within the bearing, reducing the friction coefficient and ensuring the long-term stable operation of the machine. The bearing surface, the critical part in direct contact with the rotating parts, has an important impact on the installation performance, usage performance, quality, and life of the bearing. Defects on the bearing surface, such as wear, cracks, bruises, pitting, scratches, or deformation, can lead to machine vibration and noise, accelerate the oxidation and wear of the bearing [1], and even damage the machine. It is thus necessary to inspect the surface of the bearing to prevent defective products from entering the market. The assembly of bearings at home and abroad has been largely automated, but the inspection of the bearing surface before and after assembly still relies on the naked eye of workers. Such inspection is labor-intensive, low-efficiency, and high-cost, and is easily affected by factors such as inspector qualification and experience, visual resolution of the naked eye, and fatigue. A new detection method is therefore urgently needed to replace traditional naked-eye inspection.
Machine vision inspection technology, with its high speed, high degree of automation, and non-destructive quality, has developed rapidly in recent years, so it is naturally advantageous to apply machine vision to the inspection of surface defects on bearings [2]. Against the backdrop of China's Industry 4.0 aim to upgrade the country's manufacturing industry, and given the production needs of bearing manufacturing enterprises, now is the time to investigate bearing surface defect detection and to study inspection and classification systems based on machine vision. In-depth analysis of the weak links, including the acquisition of high-definition images of the bearing surface, precision segmentation of defect areas, and selection of defect classification features, can provide efficient and automated methods for detecting exterior defects on bearings, which are important for improving bearing manufacturing.
1.2. Current Status and Bottlenecks in the Detection and Classification of Bearing Surface Defects

A machine-vision-based bearing surface defect detection system is a complex system that touches upon many fields, including mechanical design, automatic control, computer applications, image acquisition, image processing, image analysis, image interpretation, and pattern recognition. At present, the mainstream visual inspection process for bearing surface defects generally consists of image preprocessing, region-of-interest (ROI) extraction, and pattern recognition. With the continuous development of visual inspection technology for bearing surface defects, research in this area has seen considerable success. For example, Hemmati et al. [3] designed a new signal-processing algorithm and used acoustic emission technology to measure the size of bearing surface defects. Bastami et al. [4] used autoregressive models and envelope analysis to enhance the features generated when a rolling element enters and exits a defect region and used the duration between the two flawed events to estimate the size of a defect. Sobie et al. [5] generated training data from high-resolution simulations of rolling bearing dynamics and applied the data to train machine learning algorithms for bearing defect detection. Zheng et al. [6] developed a set of metal surface visual inspection experimental equipment based on genetic algorithms and image morphology. Furthermore, Pernkopf et al. [7] proposed three image acquisition schemes suitable for metal surface defect detection. Phung et al. [8] studied the pitting that often occurs on rolling bearings, and chose the area of the defect region, elongation, thickness, roundness, and smoothness of the edge as classification characteristics and as inputs to a neural network to identify the type of defect. Kunakornvon et al.
[9] investigated exterior defects generated in service, such as scratches, and focused on the analysis of geometric characteristics, the shape moment characteristics of binary images of surface defects, and the selection of neural network parameters in the classification of surface defects. Chen et al. [10] studied manufacturing defects such as striping on steel ball bearings, took integrated entropy as the criterion for whether defects exist on a steel ball bearing, and designed a linear classifier to classify defect types based on defect area, shape factor, aspect ratio, roundness similarity, rectangular similarity, and direction angle. Mikołajczyk, Nowicki et al. [11] used a single-class neural-network classifier to process tool image data, proposed a method to determine the tool wear rate based on image analysis, discussed the evaluation of errors, and implemented special neural wear-analysis software in Visual Basic to analyze the wear position of the cutting edge. This work opens a new way to use neural networks to detect surface defects on bearings. Van et al. [12] conducted a visual inspection study on surface defects of the anti-friction coating on a sliding bearing working surface and classified the defects using 12-dimensional features: 5-dimensional geometry features, 3-dimensional shape features, and 4-dimensional texture features. Mikołajczyk, Nowicki, et al. [13] proposed a two-step method for automatic tool life prediction in the turning process; the image-recognition software and artificial neural network model they developed can serve as a useful industrial tool for low-cost tool life estimation in turning operations. This conclusion gives us more confidence in carrying out our work.
To summarize, the image segmentation methods used in existing bearing surface defect detection systems are mostly based on traditional threshold segmentation and edge detection, or improved versions of these algorithms. Such algorithms, in their pursuit of inspection speed, do not extract the defect region with high precision. Defects on the bearing surface are either micro-defects or defects of small size, so the aforementioned methods cannot meet the precision requirement. Moreover, feature selection in current machine-vision-based bearing surface defect detection and classification methods mostly relies on the designer's experience or intuition and lacks quantitative requirements for the selection and extraction of classification features. As a result, the selected features are heavily influenced by environmental changes, and the performance of the classifier cannot be guaranteed. This indicates that more research effort is needed on the image segmentation and feature selection algorithms for bearing defect detection and identification; a further important consideration in defect classification is the selection of an appropriate classifier. We thus focused this research on image segmentation, feature selection, and classification algorithms.
1.3. Main Contents of the Research

The main effort of this work concerns defect detection and the design of classification algorithms, including the image segmentation algorithm and the feature selection algorithm. The proper selection of a classifier is also important for defect classification.
(1) To improve the image segmentation accuracy of the machine vision system, we investigated a local multi-neural network image segmentation algorithm with the wavelet transform as the classification feature. After combining all the target pixels, some post-processing was performed to obtain the segmentation result.
(2) We targeted the weak link of classification feature selection in existing machine-vision-based bearing surface defect identification, and studied how to effectively implement feature selection, improve the accuracy of defect classification, and avoid large-scale calculations.
2. Related Work

In the process of detecting and classifying bearing surface defects based on machine vision, image segmentation and feature selection have a great effect on the accuracy and efficiency of defect detection, and choosing an appropriate classifier is also important in defect classification. Previous researchers have done extensive work on defect image segmentation, defect feature selection, and classification and identification in different fields; their work is referenced in the development of this study.
2.1. Defect Image Segmentation

The accurate extraction of defects on the bearing surface relies mainly on image segmentation. Image segmentation methods can be divided into edge segmentation, threshold segmentation, regional growth segmentation, and segmentation based on a specific theory [14–17]. In edge segmentation, the edges are detected with the aid of first-order or second-order derivatives and then fitted to form a contour of the region before the image is segmented [18–20]. In threshold segmentation, an appropriate set of thresholds is obtained so that pixels of the image can be compared with the thresholds to group pixels with similar features into one category; the key is to choose a suitable set of thresholds based on the characteristics of the image [21–23]. Regional growth segmentation is a serial segmentation algorithm: find one or more seed pixels, design an index measuring the similarity between other pixels and the seed pixels, and then, based on this quantitative metric, determine whether to include a peripheral pixel in the area where the seed pixels are located. The iteration continues until no matching pixels remain; the result is the segmented region of the regional growth algorithm [24–26].
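The regional growth procedure described above can be sketched in a few lines. The following is an illustrative Python sketch (not a reproduction of any cited implementation): it measures similarity against the seed pixel's gray value and grows over 4-connected neighbors until no further pixel qualifies.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from `seed`: include 4-connected neighbors whose
    gray value differs from the seed pixel by at most `tol`."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    ref = float(img[seed])            # similarity reference: the seed's gray value
    queue = deque([seed])
    mask[seed] = True
    while queue:                      # serial growth until no matching pixel remains
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(img[ny, nx]) - ref) <= tol:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask

# Toy example: a bright 3x3 patch on a dark background.
img = np.zeros((6, 6), dtype=np.uint8)
img[1:4, 1:4] = 200
mask = region_grow(img, seed=(2, 2), tol=10)
```

With the seed inside the bright patch, the grown region covers exactly the nine bright pixels; other similarity indices (e.g., against the running region mean) fit the same loop structure.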
Segmentation methods based on specific theories include watershed segmentation methods based on mathematical morphology and segmentation methods based on fuzzy theory, neural networks, graph theory, and models. This type of algorithm [27–30] involves complex calculations, and the segmentation effect varies from image to image, making real-time processing difficult in the online inspection of surface defects on bearings. By combining the local binary fitting (LBF) energy function and the modified Laplacian-of-Gaussian (MLoG) approach, Biswas et al. [31] proposed an active contour model to reduce sensitivity to the initial contour. Similarly, Tarkhaneh et al. [32] made a trade-off between the accuracy and efficiency of image segmentation using a new adaptive method and a new mutation strategy. Abd Elaziz et al. [33] proposed a group selection method for multi-threshold image segmentation; by selecting an appropriate number of group algorithms from 11 algorithms, they provided expert systems with problem-solving tools. Alroobaea et al. [34], by introducing a priori information into the model parameters and taking into account the uncertainty of model parameters, solved problems related to accurate data classification during image segmentation, such as the effective estimation of model parameters and the selection of optimal model complexity. Narisetti et al. [35] combined adaptive thresholding and morphological filtering to achieve semi-automatic root image segmentation with an average Dice coefficient of 0.82 and a Pearson coefficient of 0.8. Although some researchers have paid attention to the accuracy of defect segmentation and to defect feature extraction and classification, during the inspection of bearing surface defects the defect area needs to be accurately segmented in real-time, and randomness and complexity coexist in the collected images.
As a result, a literature search has thus far failed to identify a general segmentation method with both high accuracy and high efficiency. Given this, our study targets the accurate segmentation of defect regions in bearing surface defect images at moderate computational complexity.
2.2. Defect Feature Selection and Classification

To classify bearing surface defects, one must first extract the features of the gray-scale image of the segmented defect and then perform appropriate dimensionality reduction according to the dimension of the extracted defect features. A feature data set suitable for the pattern recognition classifier is then formed, and the defect is finally classified using the pattern recognition classifier.

Many investigators have extracted features of research targets by combining geometric features, gray-scale features, texture features, and projection features. For example, Zhang et al. [36] extracted a total of 25 features of wood defect images, including geometry, region, texture, and moment invariants, to achieve the detection of wood defects. Lu et al. [37] proposed a bearing defect classification network based on an autoencoder to enhance the efficiency and accuracy of bearing defect detection. Yan et al. proposed a support vector machine recursive feature elimination (SVM-RFE) method that is capable of lowering the overfitting probability and improving feature selection efficiency [38–41] by fully utilizing the information in the training set. Similarly, Zhao and Jia [42] proposed a curvelet-transform-based global and local embedded algorithm for nonlinear feature extraction of weldment defects; its classification accuracy was shown to be better than that of the principal component analysis (PCA) and locally linear embedding (LLE) algorithms. Yildiz et al. [43] extracted defect images on the surface of woven fabric based on the textures of the gray-scale co-occurrence matrix and used the K-nearest-neighbor algorithm to classify the defect images. Further, Dubey et al. [44] extracted the spatial and color features of color images of fruit defects and used the K-means clustering algorithm to detect and classify fruit defects. Mu et al.
[45] used PCA to extract the main components of weld defect images and used support vector machines to detect and classify weld defects.

In summary, our search on feature extraction and classification and recognition methods revealed that the different methods all have their advantages and disadvantages, and that the classification performance of the pattern recognition classifier depends on the feature extraction algorithm, the classification algorithm, and the number of samples. Thus, there is no general feature extraction and classification algorithm for different types of defect data. This research therefore pursues a feature extraction and classification algorithm suited to the online detection of bearing surface defects. Based on the specific characteristics of the defects, the number of extracted defect features, and the number of samples, and considering the actual detection cost, the goal of this study was to realize efficient feature extraction and classification for online detection and classification.

3. Materials and Methods

The bearing surface defect detection and classification algorithm comprises the following five steps, as shown in Figure 1:
Step 1: Perform image preprocessing, filtering, and correction.
Step 2: Perform defect detection: position the inspection object and detect whether it has defects.
Step 3: Perform fine segmentation of the defect area. If a defect is detected, the defect area is precisely segmented, and the detailed features of the defect area are retained to the greatest extent possible.
Step 4: Perform feature selection. According to the sample data and dimensionality reduction goals, use the designed feature selection algorithm to select a feature combination with good classification performance from the feature pool.
Step 5: Perform defect classification. Based on the selected features, use pattern recognition technology to identify the type of defect.
Figure 1. Flow chart of the algorithm.
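As a rough illustration of how the five steps chain together, the following Python skeleton wires toy stand-ins into one pipeline. Every stage here is our own placeholder (the function names, thresholds, and labels are not the authors' API) and would be replaced by the algorithms of the following subsections.

```python
import numpy as np

# Toy stand-in stages so the skeleton runs end to end; each would be
# replaced by the corresponding algorithm from Section 3.
def preprocess(img):
    return img.astype(float)                      # Step 1: filter + correction
def detect_defect(img):
    diff = np.abs(img - img.mean())               # Step 2: coarse detection
    return img, bool((diff > 50).any())
def segment_defect(img):
    return img > img.mean()                       # Step 3: fine segmentation
def select_features(mask):
    return np.array([mask.sum(), mask.mean()])    # Step 4: feature selection
def classify(feat):
    return "scratch" if feat[0] > 4 else "bruise" # Step 5: classification

def inspect_bearing(image):
    """Five-step flow of Figure 1 with placeholder stages."""
    corrected = preprocess(image)
    roi, has_defect = detect_defect(corrected)
    if not has_defect:
        return "good"
    region = segment_defect(roi)
    features = select_features(region)
    return classify(features)

good = inspect_bearing(np.full((8, 8), 100.0))
bad_img = np.full((8, 8), 100.0)
bad_img[2:5, 2:5] = 220.0                         # synthetic bright defect
bad = inspect_bearing(bad_img)
```

A defect-free image exits early at Step 2 with "good"; an image with a bright patch flows through all five stages.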
3.1. Image Preprocessing

The basic principle of this step is to read the image to be inspected, first filtering it to make the edges smoother and more continuous, then performing edge detection to find the upper edge of the bearing, and finally adjusting the image according to the y-coordinate difference in edge pixel coordinates: each column is adjusted to position the edge line at the same y coordinate, correcting the distortion. The goal of the image correction is to correct the distortion of the image collected by the line array camera and restore the original characteristics of the image. The specific algorithm steps are as follows:
Step 1: Perform a 3 × 3 median filter on the distorted image.
Step 2: Detect an obvious edge of the bearing ring with the Canny operator to serve as a reference line.
Step 3: Using the starting point of the reference line as the reference point, calculate the y-coordinate difference between each pixel of the edge and the reference point.
Step 4: Keeping the reference point unchanged, cyclically shift the remaining columns of pixels according to the magnitude and sign of the difference to obtain a corrected image.
For the images of the inner and outer sides of the bearing ring, a flute of the bearing may be chosen as the reference line, as marked by a star in Figure 2a,c. Performing the correction according to the above algorithm yields the corrected results shown in Figure 2b,d.
Figure 2. Comparison of side images before and after correction: (a) inner side image before correction; (b) inner side image after correction; (c) outer side image before correction; (d) outer side image after correction.
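Steps 3 and 4 of the correction amount to cyclically shifting each column so its edge pixel lands on the reference row. A minimal numpy sketch (our illustration, not the authors' code) follows; a per-column argmax stands in for the Canny-based edge detection on a synthetic line-scan image.

```python
import numpy as np

def correct_columns(img, edge_rows):
    """Cyclically shift each column so the detected edge aligns with the
    reference (first) column's edge row, per Steps 3-4 of Section 3.1."""
    ref = edge_rows[0]                       # reference point: start of the edge line
    out = np.empty_like(img)
    for x in range(img.shape[1]):
        out[:, x] = np.roll(img[:, x], ref - edge_rows[x])
    return out

# Synthetic line-scan image: a bright edge whose row drifts with the column.
h, w = 12, 6
img = np.zeros((h, w), dtype=np.uint8)
drift = np.array([3, 4, 5, 4, 3, 2])         # distorted edge row per column
img[drift, np.arange(w)] = 255
# The paper uses a Canny reference line; argmax is a self-contained stand-in.
edge_rows = img.argmax(axis=0)
corrected = correct_columns(img, edge_rows)
```

After correction, the bright edge sits on the same row (the reference row) in every column, which is exactly the alignment shown in Figure 2b,d.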
3.2. Defect Detection

Defect detection is divided into two steps. First, locate the position of the bearing image in the whole image and separate it from the background to automatically obtain the ROI. Then, use the defect detection algorithm to determine whether the bearing in the ROI is defective.
3.2.1. Region-of-Interest (ROI) Extraction

The bearing is placed flat on the turntable, the linear array camera is parallel to the axis of the bearing, the turntable rotates at a constant speed, and the linear array camera scans the outer ring of the bearing synchronously. The acquired bearing image is divided into three areas: the bottom turntable area, the central bearing area, and the upper background area. As the three areas are distributed along the y-axis, they are well aligned in the x-axis; the gray level of the black background in the upper part is low, and the gray level of the turntable image in the bottom part is high, where the features are more pronounced and relatively stable, making them easy to detect with the feature region localization algorithm. The algorithm goes as follows:
Step 1: Perform a mean value filter on the image.
Step 2: Convert the filtered image into a binary image.
Step 3: Based on local features of the binary image, select the dark background portion of the bearing image.
Step 4: Using the smallest circumscribed rectangle, determine the coordinates of the center of the dark background.
Step 5: Based on the size of the bearing image at the center, establish a rectangular model for the central bearing region.
Step 6: Using the position information of the dark background region and its relative position to the central bearing image, locate the rectangular model of the bearing and obtain the ROI region.
Figure 3 shows the processing procedure on a ground waste sample according to the steps described above.
Figure 3. Feature location region-of-interest (ROI) extraction process: (a) original image; (b) filtered image; (c) region after conversion into binary; (d) dark background region; (e) ROI extraction result.
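The row-wise band structure exploited in Steps 1–6 can be illustrated with a deliberately simplified sketch (ours; the mean filter and the circumscribed-rectangle fitting are omitted): binarize, find the dark background band at the top, and crop everything below it as the region containing the bearing.

```python
import numpy as np

def extract_roi(img, dark_thresh=50):
    """Simplified Steps 2-6: binarize, locate the dark upper background band,
    and take the rows below it as the bearing-bearing ROI. The mean filter
    (Step 1) and rectangle model (Steps 4-5) are omitted for brevity."""
    dark = img < dark_thresh                  # Steps 2-3: dark background mask
    dark_rows = np.where(dark.all(axis=1))[0]
    top = dark_rows.max() + 1                 # Step 4 (simplified): band boundary
    return img[top:, :]                       # Steps 5-6: region below background

# Synthetic scan: dark background (rows 0-3), bearing (4-9), turntable (10-11).
img = np.concatenate([np.full((4, 8), 10),    # upper background, gray ~10
                      np.full((6, 8), 120),   # central bearing ring, gray ~120
                      np.full((2, 8), 230)])  # bottom turntable, gray ~230
roi = extract_roi(img)
```

On this toy image the dark band occupies rows 0–3, so the returned ROI is the 8-row region holding the bearing and turntable; the paper's full algorithm additionally fits a rectangular model to isolate the central bearing band.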
3.2.2. Defect Detection Algorithm

The structural characteristics of the bearing determine the high level of gray-scale consistency of the image along the x-axis, but the appearance of defects often disrupts this consistency. Based on this, we designed the following algorithm:
Step 1: Perform a Gaussian filter on the image to eliminate the influence of noise, such as fingerprints on the bearing surface, and generate a Gaussian-filtered image.
Step 2: Calculate the average value of each row of pixels along the x-axis to generate a mean-filtered image.
Step 3: Form a difference image from the difference between the Gaussian-filtered image and the mean-filtered image.
Step 4: Identify the defective region, which is the region in the difference image where the amplitude is greater than the preset threshold.
Figure 4 shows the detection results for images with a certain level of noise.
Figure 4. Test results obtained using the image detection algorithm for different defects: (a) original image of over-grinding; (b) detection result of over-grinding; (c) original image of the scratch defect; (d) detection result of the scratch defect; (e) original image of the bruise defect; (f) detection result of the bruise defect.
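Steps 2–4 of the detection algorithm reduce to comparing each pixel with its row mean and thresholding the difference. A hedged sketch (our simplification; the Gaussian prefilter of Step 1 is omitted, so this operates on the raw image):

```python
import numpy as np

def detect_defects(img, thresh=30):
    """Steps 2-4: compare each pixel with its row mean along the x-axis and
    flag pixels whose absolute difference exceeds `thresh` as defective."""
    img = img.astype(float)
    row_mean = img.mean(axis=1, keepdims=True)   # Step 2: row-wise mean image
    diff = np.abs(img - row_mean)                # Step 3: difference image
    return diff > thresh                         # Step 4: threshold

# Rows are uniform along x except for a small bright "bruise".
img = np.full((10, 20), 100.0)
img[4:6, 8:11] = 180.0
mask = detect_defects(img, thresh=30)
```

Because defect-free rows are nearly constant along the x-axis, their difference image stays near zero, while the bright patch stands out well above the threshold; in the paper the Gaussian prefilter additionally suppresses noise such as fingerprints before this comparison.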
3.3. Precision Segmentation of the Defect Region

Gaussian filtering is used in the defect detection process. It eliminates noise but also blurs the boundary of the defect, reduces the accuracy of the segmentation of the defect region, and can even affect the classification of the defect. It is therefore necessary to accurately segment the region obtained above.

To address this situation, building on previous research, we proposed a local multiple neural network (Lc-MNN) image segmentation algorithm that extracts features with the wavelet transform to further process the initially segmented images and improve the accuracy of segmentation. The algorithm operates on the initially segmented results and comprises three steps: feature extraction with the wavelet transform, region division with Lc-MNN, and classification and post-processing with Lc-MNN. The steps are as follows.
Step 1: Wavelet transform feature extraction: perform a three-layer wavelet transform on the original image to obtain a series of high-frequency and low-frequency images stacked in a pyramid structure (as shown in Figure 5a). These images are expanded to the size of the original image by nearest-neighbor interpolation (as shown in Figure 5b), together with the original image. Each pixel location is thus characterized by an 11-dimensional feature vector.
Step 2: Lc-MNN region division: first, perform morphological processing of the image using the boundary of the initial segmentation to form an overall to-be-classified region, a target sample region, and a background sample region centered on the boundary. Then, fit the boundary curve with an N-sided polygon and generate N symmetric rectangles centered on the N sides, with each rectangle representing a neural-network region.
Finally, each rectangle is intersected with the overall to-be-classified region, the target sample region, and the background sample region to obtain the to-be-classified region, the target sample region, and the background sample region of each neural-network region.
Step 3: Lc-MNN classification and post-processing: after the above two steps, the training samples and to-be-classified pixels in the i-th neural-network region can be represented by 11-dimensional features. Use the training samples to train the neural network to obtain the neural network classifier, and then the feature vector of the
pixel to be classified is input into the classifier to obtain the classification result of whether each pixel belongs to the target region. This process may produce unconnected areas or holes in the area, but these artifacts can be eliminated through regional operations to obtain the final segmentation results.
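As a concrete sketch of Step 1, the 11-dimensional per-pixel feature can be assembled from the original image plus the ten subbands of a three-layer wavelet transform (nine detail images and the final approximation), each expanded back to full size by nearest-neighbor interpolation. The Haar basis and all function names below are illustrative assumptions; the paper does not specify the mother wavelet.

```python
import numpy as np

def haar_level(a):
    """One level of the 2D Haar transform: approximation + 3 detail subbands.
    Assumes even dimensions (the full pipeline needs dimensions divisible by 8)."""
    a00, a01 = a[0::2, 0::2], a[0::2, 1::2]
    a10, a11 = a[1::2, 0::2], a[1::2, 1::2]
    ll = (a00 + a01 + a10 + a11) / 4.0          # low-frequency approximation
    lh = (a00 + a01 - a10 - a11) / 4.0          # horizontal detail
    hl = (a00 - a01 + a10 - a11) / 4.0          # vertical detail
    hh = (a00 - a01 - a10 + a11) / 4.0          # diagonal detail
    return ll, lh, hl, hh

def nn_upsample(a, shape):
    """Nearest-neighbor expansion of a subband back to the original size."""
    rows = (np.arange(shape[0]) * a.shape[0]) // shape[0]
    cols = (np.arange(shape[1]) * a.shape[1]) // shape[1]
    return a[np.ix_(rows, cols)]

def pixel_features(img):
    """Stack the original image with the 10 subbands of a 3-layer transform,
    yielding one 11-dimensional feature vector per pixel: shape (H, W, 11)."""
    channels, approx = [img.astype(float)], img.astype(float)
    for _ in range(3):
        approx, lh, hl, hh = haar_level(approx)
        channels += [nn_upsample(b, img.shape) for b in (lh, hl, hh)]
    channels.append(nn_upsample(approx, img.shape))  # final approximation
    return np.stack(channels, axis=-1)
```

The resulting (H, W, 11) array provides the per-pixel training and classification vectors consumed in Steps 2 and 3.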
Figure 5. Multi-scale image feature structure: (a) pyramid structure of the images; (b) cuboid structure of the images.

The framework of the algorithm is shown in Figure 6.

3.4. Feature Selection and Extraction

To accurately identify the type of bearing surface defects, it is necessary to select efficient classification features. We proposed the comprehensive use of a practical algorithm that combines scalar feature selection [25–31], correlation analysis [27,28], and vector feature selection [29,30], abbreviated as the SCV algorithm.

The SCV feature selection algorithm is mainly divided into the following four steps:

Step 1: Establish a feature pool. Collect as many features of the classified object as possible and combine them into a set of candidate features.
Step 2: Perform data acquisition and processing. Complete the conversion of a sample from an image to a normalized feature vector through the steps of image acquisition, image processing, and feature calculation.
Step 3: Set a target of dimension reduction. Make a comprehensive determination of the number of features to be used for classification based on the dimension of the feature pool, the peaking phenomenon, and the number of training samples.
Step 4: Perform SCV feature selection. Based on the sample data and the dimensionality reduction target, use the designed feature selection algorithm to select a set of features with good classification performance from the feature pool. First, rank the features by their classification performance, from good to poor, based on the separability criterion to realize the conversion from feature pool X to feature vector x(d). Second, perform correlation analysis between features to eliminate strongly correlated features and reduce the dimensionality of the feature vector from x(d) to x(d1). Then, carry out a scalar feature selection on x(d1), i.e., select only the first d2 features from x(d1) to form a new feature vector x(d2) to complete the second dimensionality reduction. Finally, perform a vector feature selection on x(d2) to select the optimal classification set of d0 features from x(d2) to obtain the final classification feature vector x(d0).
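The four SCV steps of the selection stage can be sketched as a single pipeline. The scoring functions, the correlation threshold, and all names below are placeholders rather than the paper's implementation; the specific scalar criterion (FDR) and vector criterion are defined in the following subsection of the paper.

```python
import itertools
import numpy as np

def scv_select(samples, rank_score, subset_score, d2, d0, corr_thresh=0.9):
    """Sketch of the SCV selection stage: ranking -> correlation pruning ->
    scalar selection -> vector selection.  `rank_score(col)` scores a single
    feature column (e.g., FDR); `subset_score(cols)` scores a feature subset.
    Both scoring functions and `corr_thresh` are illustrative placeholders."""
    d = samples.shape[1]
    # 1) order features by individual separability, best first: X -> x(d)
    order = sorted(range(d), key=lambda l: rank_score(samples[:, l]), reverse=True)
    # 2) correlation analysis: drop any feature strongly correlated
    #    with an already-kept (better-ranked) feature: x(d) -> x(d1)
    kept = []
    for l in order:
        r = [abs(np.corrcoef(samples[:, l], samples[:, m])[0, 1]) for m in kept]
        if not r or max(r) < corr_thresh:
            kept.append(l)
    # 3) scalar selection: keep only the first d2 features: x(d1) -> x(d2)
    kept = kept[:d2]
    # 4) vector selection: best subset of size d0 under the subset
    #    criterion (exhaustive search): x(d2) -> x(d0)
    best = max(itertools.combinations(kept, d0),
               key=lambda c: subset_score(samples[:, list(c)]))
    return list(best)
```

Exhaustive subset search in step 4 is feasible here because d2 is already small after the first two reductions.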
Figure 6. The framework of the multi-neural network segmentation algorithm.
According to the above algorithm design, we established a feature selection criterion based on the correlation coefficient. The specific algorithm uses the Fisher criterion for scalar feature selection and Jl as the selection criterion for the feature vector.

According to the algorithm procedure, assume that a feature pool, X, has been established; a standardized training sample set, T, has been obtained; and a dimensionality reduction goal, d0, has been set. Here, the feature pool has d features, expressed as X = {xl}, (l = 1, 2, . . . , d). Moreover, T has c types (c ≥ 3), designated as ω1, ω2, . . . , ωc, respectively, with each type ωi having Ni samples, for a total of N training samples. The goal is to choose d0 features from the feature pool of d features as the final classification features. The specific steps are:

Step 1: Perform Fisher discrimination ratio (FDR) feature sequencing. The features in the feature pool exist in an aggregate form. For ease of representation and computation, all features need to be sequenced in a certain order to form a feature vector. They may be ordered according to the FDR in descending order. The FDR is used to characterize the separability of a single feature. The larger the FDR value, the better the separability of the feature.
For any feature xl in the feature pool, its FDR value [46] is
F_{DR}(x_l) = \sum_{i} \sum_{j \neq i} \frac{\left( \mu_l^{(i)} - \mu_l^{(j)} \right)^2}{\sigma_l^{(i)2} + \sigma_l^{(j)2}},  (1)

\mu_l^{(i)} = \frac{1}{N_i} \sum_{k=1}^{N_i} x_{l,k}^{(i)},  (2)

\sigma_l^{(i)2} = \frac{1}{N_i - 1} \sum_{k=1}^{N_i} \left( x_{l,k}^{(i)} - \mu_l^{(i)} \right)^2,  (3)

Here, x_{l,k}^{(i)} is the k-th sample of type ωi represented by feature xl, µl^{(i)} is the average value of feature xl over type ωi, and σl^{(i)2} is the variance of feature xl over type ωi. By carrying out the above calculation for all features in X, one can obtain the FDR values of all the features. By arranging the features in descending order of their FDR values, one can obtain the feature vector x = (x1, x2, . . . , xd)^T.
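Equations (1)–(3) translate directly into code. A minimal sketch, assuming the training samples are given as an N × d array with integer class labels (the function name is hypothetical):

```python
import numpy as np

def fdr_scores(samples, labels):
    """Fisher discrimination ratio (Eq. 1) of each of the d features.

    samples: (N, d) array of feature vectors; labels: (N,) class labels.
    Returns an array of d FDR values, one per feature."""
    classes = np.unique(labels)
    # Per-class mean (Eq. 2) and unbiased variance (Eq. 3) of every feature
    mu = np.array([samples[labels == c].mean(axis=0) for c in classes])
    var = np.array([samples[labels == c].var(axis=0, ddof=1) for c in classes])
    scores = np.zeros(samples.shape[1])
    # Sum over all ordered class pairs (i, j), j != i, as in Eq. 1
    for i in range(len(classes)):
        for j in range(len(classes)):
            if j != i:
                scores += (mu[i] - mu[j]) ** 2 / (var[i] + var[j])
    return scores

# Features are then arranged in descending order of FDR to form x:
# order = np.argsort(fdr_scores(samples, labels))[::-1]
```

A feature whose per-class means are well separated relative to its within-class variances receives a large score, matching the separability interpretation of Equation (1).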
Step 2: Perform correlation feature selection. For each feature xl, arrange its values over all the samples in training sample set T in sequence to form the vector xl. As each sample has d features, d vectors of this kind can be generated. The correlation coefficient between any two vectors xi and xj is: