
applied sciences

Article

Detection and Classification of Bearing Surface Defects Based on Machine Vision

Manhuai Lu 1,* and Chin-Ling Chen 2,3,4,*

1 College of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Zhongshan Institute, Zhongshan 528400, China
2 College of Computer and Information Engineering, Xiamen University of Technology, Xiamen 361024, China
3 School of Information Engineering, Changchun Sci-Tech University, Changchun 130600, China
4 Department of Computer Science and Information Engineering, Chaoyang University of Technology, Taichung 413310, Taiwan
* Correspondence: [email protected] (M.L.); [email protected] (C.-L.C.)

Abstract: Surface defects on bearings can directly affect the service life and reduce the performance of equipment. At present, the detection of bearing surface defects is mostly done manually, which is labor-intensive and results in poor stability. To improve the inspection speed and the defect recognition rate, we propose a bearing surface defect detection and classification method using machine vision technology. The method makes two main contributions. It proposes a local multi-neural-network (Lc-MNN) image segmentation algorithm with the wavelet transform as the classification feature. The precision segmentation of the defect image is accomplished in three steps: wavelet feature extraction, Lc-MNN region division, and Lc-MNN classification. It also proposes a feature selection algorithm (SCV) that makes comprehensive use of scalar feature selection, correlation analysis, and vector feature selection: similar features are first removed through correlation analysis, the results are further screened with a scalar feature selection algorithm, and the classification features are finally selected using a feature vector selection algorithm. Using 600 test samples with three types of defect in the experiment, an identification rate of 99.5% was achieved without the need for large-scale calculation. The comparison tests indicated that the proposed method can achieve efficient feature selection and defect classification.

Keywords: computer monitoring and production control; bearing surface inspection; feature selection; defect classification; the use of artificial intelligence in industry

Citation: Lu, M.; Chen, C.-L. Detection and Classification of Bearing Surface Defects Based on Machine Vision. Appl. Sci. 2021, 11, 1825. https://doi.org/10.3390/app11041825

Academic Editor: Albert Smalcerz

Received: 28 January 2021; Accepted: 15 February 2021; Published: 18 February 2021

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

1.1. Background

Bearings are important components of mechanical equipment. They convert the direct friction between parts in relative rotation into rolling or sliding friction of the bearing, hence reducing the friction coefficient and ensuring the long-term stable operation of the machine. The bearing surface, the critical part in direct contact with the rotating parts, has an important impact on the installation performance, usage performance, quality, and life of the bearing. If there are defects on the bearing surface, such as wear, cracks, bruises, pitting, scratches, or deformation, they can lead to machine vibration and noise, accelerate the oxidation and wear of the bearing [1], and even damage the machine. It is thus necessary to inspect the surface of the bearing to prevent defective products from entering the market. The assembly of bearings at home and abroad has been fully automated for the most part, but the inspection of the bearing surface before and after assembly is still based on the naked eye of the workers. Such an inspection method is labor-intensive, low-efficiency, high-cost, and easily affected by such factors as inspector qualification and experience, visual resolution of the naked eye, and fatigue. A new detection method is, therefore, urgently needed to replace the traditional naked-eye detection method.


Machine vision inspection technology, with its high speed, high degree of automation, and non-destructive quality, has rapidly developed in recent years. It is therefore naturally advantageous to apply machine vision to the inspection of surface defects on bearings [2]. Against the backdrop of China's Industry 4.0 aim to upgrade the country's manufacturing industry, and given the production needs of bearing manufacturing enterprises, now is the time to investigate bearing surface defect detection and to study inspection and classification systems based on machine vision. In-depth analysis of weak links, including the acquisition of high-definition images of the bearing surface, precision segmentation of defect areas, and selection of defect classification features, can provide efficient and automated methods for detecting exterior defects on bearings, which are important for improving bearing manufacturing.

1.2. Current Status and Bottlenecks in the Detection and Classification of Bearing Surface Defects

The bearing surface defect detection system based on machine vision is a complex system that touches upon many fields, including mechanical design, automatic control, computer applications, image acquisition, image processing, image analysis, image interpretation, and pattern recognition. At present, the mainstream visual inspection process for bearing surface defects generally consists of image preprocessing, region-of-interest (ROI) extraction, and pattern recognition. With the continuous development of visual inspection technology for bearing surface defects, research in this area has been successful. For example, Hemmati et al. [3] designed a new signal-processing algorithm and used acoustic emission technology to measure the size of bearing surface defects. Bastami et al. [4] used autoregressive models and envelope analysis to enhance the features when a rolling object enters and exits a defect region and used the duration between two flawed events to estimate the size of a defect. Sobie et al. [5] generated training data using information obtained from a high-resolution simulation of rolling bearing dynamics and applied the data to train machine learning algorithms for bearing defect detection. Zheng et al. [6] developed a set of metal surface visual inspection experimental equipment based on genetic algorithms and image morphology. Furthermore, Pernkopf et al. [7] proposed three image acquisition schemes suitable for metal surface defect detection. Phung et al. [8] studied the pitting that often occurs on rolling bearings, and chose the area of the defect region, elongation, thickness, roundness, and smoothness of the edge as classification characteristics and as inputs to a neural network to finally identify the type of defect. Kunakornvon et al. [9] investigated exterior defects generated in service, such as scratches, and focused on the analysis of geometric characteristics, the shape moment characteristics of binary images of surface defects, and the selection of neural network parameters in the classification of surface defects. Chen et al. [10] studied manufacturing defects such as striping on steel ball bearings, took integrated entropy as the criterion for whether defects exist on a steel ball, and designed a linear classifier to classify defect types based on defect area, shape factor, aspect ratio, roundness similarity, rectangular similarity, and direction angle. Mikołajczyk, Nowicki et al. [11] used a single-classification neural network classifier to process tool image data, proposed a method to determine the tool wear rate based on image analysis, discussed the evaluation of errors, and implemented special neural wear software in Visual Basic to analyze the wear position of the cutting edge. This work opens a new way for us to use neural networks to detect the surface defects of bearings. Van et al. [12] conducted a visual inspection study on the surface defects of the anti-friction coating on a sliding bearing working surface and classified the defects using 12-dimensional features, including 5-dimensional geometry features, 3-dimensional shape features, and 4-dimensional texture features. Mikołajczyk, Nowicki, et al. [13] proposed a two-step method for automatic tool life prediction in the turning process. The development of image-recognition software and an artificial neural network model can be used as a useful industrial tool for low-cost tool life estimation in a turning operation. This conclusion makes us more confident in carrying out our work.

To summarize, the image segmentation methods used in existing bearing surface defect detection systems are mostly based on traditional methods of threshold segmentation and edge detection or improved versions of those algorithms. Such algorithms, in their pursuit of inspection speed, do not have high precision in extracting the defect region. Defects on the bearing surface are either micro-defects or defects of small size, so the aforementioned methods cannot meet the precision requirement. Moreover, feature selection in current machine-vision-based systems for bearing surface defect detection and classification mostly relies on the designer's experience or intuition and lacks quantitative requirements for the selection and extraction of classification features. As a result, the selected features are heavily influenced by environmental changes, and the performance of the classifier cannot be guaranteed. This indicates that more research effort is needed on the image segmentation and feature selection algorithms in the area of bearing defect detection and identification, and an important consideration in defect classification is the selection of an appropriate classifier. We thus focused this research on image segmentation, feature selection, and classification algorithms.

1.3. Main Contents of Research

The main effort of this work is related to defect detection and the design of classification algorithms, including the image segmentation algorithm and the feature selection algorithm. The proper selection of a classifier is also important for defect classification, so we focused this work on image segmentation, feature selection, and classification algorithms.

(1) To improve the image segmentation accuracy of the machine vision system, we investigated a local multi-neural-network image segmentation algorithm with the wavelet transform as the classification feature. After combining all the target pixels, some post-processing was performed, and the segmentation result was obtained.

(2) We targeted the weak link of classification feature selection in existing machine-vision-based bearing surface defect identification, and studied how to effectively implement feature selection, improve the accuracy of defect classification, and avoid large-scale calculations.

2. Related Work

In the process of detecting and classifying bearing surface defects based on machine vision, image segmentation and feature selection have a great effect on the accuracy and efficiency of defect detection. Additionally, choosing an appropriate classifier is also important in defect classification. Previous researchers have done extensive work on defect image segmentation, defect feature selection, and classification identification in different fields. Their work informed the development of this work.

2.1. Defect Image Segmentation

The accurate extraction of defects on the bearing surface relies mainly on image segmentation. Image segmentation methods can be divided into edge segmentation, threshold segmentation, regional growth segmentation, and image segmentation based on a specific theory [14–17]. In edge segmentation, the edges are detected with the aid of first-order or second-order derivatives and then fitted to form a contour of the region before the image is segmented [18–20]. In threshold segmentation, an appropriate set of thresholds is obtained so that pixels of the image can be compared with the thresholds to group pixels with similar features in one category. The key is to choose a suitable set of thresholds based on the characteristics of the image [21–23]. Regional growth segmentation is a serial segmentation algorithm. We find one or more seed pixels, design an index for measuring the similarity between other pixels and the seed pixels, and then, based on this quantitative metric, determine whether to include a peripheral pixel in the area where the seed pixels are located. This iteration continues until there are no matching pixels, and the result is the segmented region of the regional growth segmentation algorithm [24–26].
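The serial region-growing loop just described can be made concrete with a short sketch (ours, not from the cited works). It uses the absolute gray-level difference to the seed value as the similarity index; the 4-connected neighborhood and the tolerance `tol` are illustrative assumptions:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from `seed`, absorbing 4-connected neighbours whose
    gray value differs from the seed's by at most `tol`."""
    h, w = img.shape
    seed_val = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:                      # iterate until no matching pixels remain
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(img[ny, nx]) - seed_val) <= tol:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask

# Example: a bright 3x3 blob inside a dark image is recovered exactly.
img = np.zeros((8, 8), dtype=np.uint8)
img[2:5, 2:5] = 200
mask = region_grow(img, seed=(3, 3), tol=10)
```

In practice the similarity index, the connectivity, and the stopping rule are all design choices; the variants surveyed in [24–26] differ mainly in these three points.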

Segmentation methods based on specific theories include watershed segmentation methods based on mathematical morphology and segmentation methods based on fuzzy theory, neural networks, graph theory, and models. This type of algorithm [27–30] involves complex calculations, and the segmentation effect varies from image to image; for online inspection of surface defects on bearings, it is difficult to process images in real time. By combining the local binary fitting (LBF) energy function and the modified Laplacian-of-Gaussian (MLoG) approach, Biswas et al. [31] proposed an active contour model to reduce the sensitivity to the initial contour. Similarly, Tarkhaneh et al. [32] made a trade-off between the accuracy and efficiency of image segmentation using a new adaptive method and a new mutation strategy. Abd Elaziz et al. [33] proposed a group selection method for multi-threshold image segmentation. By selecting an appropriate number of group algorithms from 11 algorithms, they provided expert systems with problem-solving tools. Alroobaea et al. [34], by introducing a priori information into the model parameters and taking into account the uncertainty of model parameters, solved problems related to accurate data classification during image segmentation, such as the effective estimation of model parameters and the selection of optimal model complexity. Narisetti et al. [35] combined adaptive thresholding and morphological filtering to achieve semi-automatic root image segmentation with an average Dice coefficient of 0.82 and a Pearson coefficient of 0.8. Although some researchers have paid attention to the accuracy of defect segmentation and the problem of defect feature extraction and classification, during the inspection of bearing surface defects the defect area needs to be accurately segmented in real time, and randomness and complexity coexist in the collected images.
As a result, a literature search has thus far failed to identify a general segmentation method with both high accuracy and efficiency. Given this, our study targets the accurate segmentation of defect regions in bearing surface defect images at moderate computational complexity.

2.2. Defect Feature Selection and Classification

To classify bearing surface defects, one must first extract the features of the gray-scale image of the segmented defect, and then perform appropriate dimensionality reduction according to the dimension of the extracted defect features. A feature data set suitable for the pattern recognition classifier is then formed, and the defect is finally classified using the pattern recognition classifier.

Many investigators have extracted features of research targets by combining geometric features, gray-scale features, texture features, and projection features. For example, Zhang et al. [36] extracted a total of 25 features of defect images, including geometry, region, texture, and moment invariants, to achieve the detection of wood defects. Lu et al. [37] proposed a bearing defect classification network based on an autoencoder to enhance the efficiency and accuracy of bearing defect detection. Yan et al. proposed a support vector machine recursive feature elimination (SVM-RFE) method that is capable of lowering the overfitting probability and improving feature selection efficiency [38–41] by fully utilizing the information in the training set. Similarly, Zhao and Jia [42] proposed a curvelet-transform-based global and local embedded algorithm for nonlinear feature extraction of weldment defects. Its classification accuracy was shown to be better than that of the principal component analysis (PCA) and locally linear embedding (LLE) algorithms. Yildiz et al. [43] extracted defect images on the surface of woven fabric based on the textures of the gray-scale co-occurrence matrix and used the K-nearest-neighbor algorithm to classify the defect images. Further, Dubey et al. [44] extracted the spatial and color features of the color images of fruit defects and used the K-means clustering algorithm to detect and classify fruit defects. Mu et al.
[45] used PCA to extract the main components of weld defect images and used support vector machines to detect and classify weld defects.

In summary, our search on feature extraction and classification and recognition methods revealed that the different methods all have their advantages and disadvantages, and the classification performance of the pattern recognition classifier depends on the feature extraction algorithm, the classification algorithm, and the number of samples. Thus, there is no general feature extraction and classification algorithm for different types of defect data. This research therefore develops a feature extraction and classification algorithm suited to the online detection of bearing surface defects. Based on the specific characteristics of the defects, the number of extracted defect features, and the number of samples, and considering the actual detection cost, the goal of this study was to realize efficient feature extraction and classification.

3. Materials and Methods

The bearing surface defect detection and classification algorithm comprises the following five steps, as shown in Figure 1:

Step 1: Perform image preprocessing, filtering, and correction.
Step 2: Perform defect detection: position the inspection object and detect whether it has defects.
Step 3: Perform fine segmentation of the defect area. If a defect is detected, the defect area will be precisely segmented, and the detailed features of the defect area will be retained to the greatest extent possible.
Step 4: Perform feature selection. According to the sample data and dimensionality reduction goals, use the designed feature selection algorithm to select a feature combination with good classification performance from the feature pool.
Step 5: Perform defect classification. Based on the selected features, use pattern recognition technology to identify the type of defect.

Figure 1. Flow chart of the algorithm.
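The five-step flow can be read as the following skeleton; every function here is a hypothetical placeholder standing in for the corresponding subsection of Section 3, not the authors' implementation:

```python
# Hypothetical skeleton of the five-step pipeline in Figure 1.

def preprocess(img):                 # Step 1: filtering + distortion correction
    return img

def find_defect(img):                # Step 2: ROI extraction + coarse detection
    return img, True                 # placeholder: always reports a defect

def segment_defect(roi):             # Step 3: Lc-MNN precision segmentation
    return roi

def select_features(region):         # Step 4: SCV feature selection
    return [0.0]                     # placeholder feature vector

def classify(features):              # Step 5: pattern-recognition classifier
    return "scratch"                 # placeholder label

def inspect(img):
    """Run the full flow on one bearing image."""
    img = preprocess(img)
    roi, has_defect = find_defect(img)
    if not has_defect:
        return "no defect"
    return classify(select_features(segment_defect(roi)))
```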

3.1. Image Preprocessing

The basic principle of this step of the algorithm is to read the image to be inspected, first filtering the image to make the edges of the image smoother and more continuous, then performing edge detection on the image to detect the upper edge of the bearing, and finally adjusting the image according to the y-coordinate difference in edge pixel coordinates. The corresponding column is adjusted to position the edge line at the same y coordinate to correct the distortion. The goal of the image correction is to correct the distortion of the image collected by the line array camera and restore the original characteristics of the image. The specific algorithm steps are as follows:

Step 1: Perform a 3 × 3 median filter on the distorted image.
Step 2: Detect an obvious edge of the bearing ring with the Canny operator to serve as a reference line.
Step 3: Using the starting point of the reference line as the reference point, calculate the y-coordinate difference between each pixel point of the edge and the reference point in the y-direction.
Step 4: Keeping the reference point unchanged, cyclically move the remaining columns of pixels according to the magnitude and sign of the difference to obtain a corrected image.

For the images of the inner and outer sides of the bearing ring, a flute of the bearing may be chosen as the reference line, as shown by the starred lines in Figure 2a,c. Performing the correction according to the above algorithm yields the corrected results shown in Figure 2b,d.


Figure 2. Comparison of side images before and after correction: (a) inner side image before correction; (b) inner side image after correction; (c) outer side image before correction; (d) outer side image after correction.
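Once the reference line has been traced, Step 4 of the preprocessing reduces to a cyclic per-column shift. A minimal numpy sketch (ours; `edge_y` stands in for the per-column edge y-coordinates that the Canny step would supply):

```python
import numpy as np

def correct_distortion(img, edge_y):
    """Cyclically shift each column so the detected edge line becomes flat.

    `edge_y[j]` is the y-coordinate of the reference edge in column j;
    column 0 serves as the reference point (Step 3 of the algorithm)."""
    out = np.empty_like(img)
    ref = edge_y[0]
    for j in range(img.shape[1]):
        # A negative shift moves the column up when its edge sits below the reference.
        out[:, j] = np.roll(img[:, j], ref - edge_y[j])
    return out

# Example: a synthetic image whose single bright edge drifts downward.
img = np.zeros((6, 4), dtype=np.uint8)
edge_y = np.array([1, 2, 3, 2])
for j, y in enumerate(edge_y):
    img[y, j] = 255
flat = correct_distortion(img, edge_y)   # the edge now lies entirely in row 1
```

The cyclic shift (`np.roll`) mirrors the paper's "cyclically move the remaining columns"; a real implementation would also handle the wrap-around rows, which for a line-scan image of a rotating part are physically contiguous.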

3.2. Defect Detection

Defect detection is divided into two steps. Locate the position of the bearing image in the whole image and separate it from the background to automatically obtain the ROI. Then, use the defect detection algorithm to determine whether the bearing in the ROI is defective.

3.2.1. Region-of-Interest (ROI) Extraction

The bearing is placed flat on the turntable, the linear array camera is parallel to the axis of the bearing, the turntable rotates at a constant speed, and the linear array camera scans the outer ring of the bearing synchronously. The acquired bearing image is divided into three areas: the bottom turntable area, the central bearing area, and the upper background area. As the three areas are distributed along the y-axis, they are well aligned in the x-axis, the gray level of the black background in the upper part is low, and the gray level of the turntable image in the bottom part is high, where the features are more pronounced and relatively stable, making them easy to detect with the feature region localization algorithm. The algorithm goes as follows:

Step 1: Perform a mean value filter on the image.
Step 2: Convert the filtered image into a binary image.
Step 3: Based on local features of the binary image, select the dark background portion of the bearing image.
Step 4: Using the smallest circumscribed rectangle, determine the coordinate of the center of the dark background.
Step 5: Based on the size of the bearing image at the center, establish a rectangular model for the central bearing region.
Step 6: Using the position information of the dark background region and its relative position to the central bearing image, locate the rectangular model of the bearing and obtain the ROI region.

Figure 3 shows the processing procedure on a ground waste sample according to the steps described above.


Figure 3. Feature location region-of-interest (ROI) extraction process: (a) original image; (b) filtered image; (c) region after conversion into binary; (d) dark background region; (e) ROI extraction result.
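A simplified sketch of the ROI localization (ours; the mean filter is omitted and the threshold and geometry are illustrative). It relies only on the y-axis layout described above: dark background band at the top, bearing in the middle, bright turntable at the bottom:

```python
import numpy as np

def locate_roi(img, dark_thresh=50, bearing_height=None):
    """Return (top, bottom) row indices of the bearing band.

    Steps 2-3: threshold to a binary mask of the dark background.
    Steps 4-6: use the lower edge of that band to anchor the bearing
    rectangle below it (a stand-in for the circumscribed-rectangle model)."""
    binary = img < dark_thresh                 # dark-background mask
    rows = np.flatnonzero(binary.all(axis=1))  # rows fully inside the dark band
    dark_bottom = rows.max()                   # lower edge of the background
    top = dark_bottom + 1                      # bearing starts just below it
    bottom = top + (bearing_height or img.shape[0] - top)
    return top, bottom

# Example: dark background rows 0-2, bearing rows 3-6, bright turntable rows 7-9.
img = np.full((10, 5), 128, dtype=np.uint8)
img[0:3] = 10
img[7:10] = 230
top, bottom = locate_roi(img, bearing_height=4)
```

The real algorithm additionally exploits the turntable band and a rectangular model of the bearing; this sketch only shows how the stable dark band anchors the ROI.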

3.2.2. Defect Detection Algorithm

The structural characteristics of the bearing determine the high level of gray-scale consistency of the images along the x-axis, but the appearance of defects often disrupts this consistency. Based on this, we designed the following algorithm:

Step 1: Perform a Gaussian filter on the image to eliminate the influence of noise, such as fingerprints on the bearing surface, and generate a Gaussian-filtered image.
Step 2: Calculate the average value of each row of pixels along the x-axis to generate an average filtered image.
Step 3: Form a difference image from the difference between the Gaussian-filtered image and the average filtered image.
Step 4: Identify the defective region, which is the region in the difference image where the amplitude is greater than the preset threshold.

Figure 4 shows the detection results for images with a certain level of noise.


Figure 4. Test results obtained using the image detection algorithm for different defects: (a) original image of over-grinding; (b) detection result of over-grinding; (c) original image of the scratch defect; (d) detection result of the scratch defect; (e) original image of the bruise defect; (f) detection result of the bruise defect.
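Steps 1–4 of the detection algorithm can be sketched as follows (our illustration; the 5-tap kernel, sigma, and the threshold of 40 gray levels are assumed values, not the paper's):

```python
import numpy as np

def detect_defects(img, thresh=40, sigma=1.0):
    """Flag pixels that break the row-wise gray-level consistency."""
    # Step 1: light Gaussian blur via a small separable 1-D kernel.
    r = np.arange(-2, 3)
    k = np.exp(-r**2 / (2 * sigma**2))
    k /= k.sum()
    blur = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"),
                               0, img.astype(float))
    blur = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"),
                               1, blur)
    # Step 2: per-row means along the x-axis ("average filtered image").
    row_mean = blur.mean(axis=1, keepdims=True)
    # Steps 3-4: difference image, thresholded on amplitude.
    return np.abs(blur - row_mean) > thresh

# Example: a uniform surface with a short bright scratch in column 10.
img = np.full((20, 20), 100, dtype=np.uint8)
img[5:8, 10] = 255
mask = detect_defects(img)   # flags the scratch, not the uniform background
```

Because every non-defective row is nearly constant, its blurred values stay close to the row mean, so only the scratch exceeds the amplitude threshold.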

3.3. Precision Segmentation of Defect Region 3.3. PrecisionGaussian Segmentation filter processing of Defect is Regionused in the defect detection process. The process eliminatesGaussian noise filter but processingalso blurs isthe used boundary in the defect of the detection defect, reduces process. the The accuracy process of elimi- the segmentationnates noise but of also the blurs defect the region, boundary and of can the defect,even affect reduces thethe classification accuracy of of the the segmenta- defect. Therefore,tion of the it defect is necessary region, andto perform can even accurate affect the segmentation classification of of the the region defect. obtained Therefore, above. it is necessaryTo address to perform this situation accurate based segmentation on previous of the research, region obtained we proposed above. a local multiple neuralTo network address (Lc-MNN) this situation image based segmentation on previous algorithm research, for we extracting proposed the a local features multiple with theneural wavelet network transform (Lc-MNN) to further image segmentationprocess the initial algorithm segmented for extracting images the to featuresimprove with the accuracythe wavelet of segmentation. transform to furtherThe application process theof the initial algorithm segmented was carried images out to based improve on the initiallyaccuracy segmented of segmentation. results. TheThe application algorithm comp of therises algorithm three steps: was carried feature out extraction based on with the theinitially wavelet segmented transform, results. region The algorithm division compriseswith Lc-MNN, three steps: and featureclassification extraction and with post- the processingwavelet transform, with Lc-MNN. region The division steps with are as Lc-MNN, follows. and classification and post-processing with Lc-MNN. The steps are as follows. 
Step 1: Wavelet transform feature extraction: perform a three-layer wavelet transform on the original image to obtain a series of high-frequency and low-frequency images stacked in a pyramid structure (as shown in Figure 5a). These images are expanded to the size of the original image by nearest-neighbor interpolation (as shown in Figure 5b), together with the original image. Each pixel location is thus characterized by an 11-dimensional feature vector.

Step 2: Lc-MNN region division: first, perform morphological processing of the image using the boundary of the initial segmentation to form an overall to-be-classified region, a target sample region, and a background sample region centered on the boundary. Then, fit the boundary curve with an N-sided polygon and generate N symmetric rectangles centered on the N sides, with each rectangle representing a neural-network region. Finally, each rectangle is intersected with the overall to-be-classified region, the target sample region, and the background sample region to obtain the to-be-classified region, the target sample region, and the background sample region of each neural-network region.

Step 3: Lc-MNN classification and post-processing: after the above two steps, the training samples and the to-be-classified pixels in the i-th neural-network region are represented by 11-dimensional features. The training samples are used to train the neural network to obtain a classifier, and the feature vector of each pixel to be classified is then input into the classifier to determine whether the pixel belongs to the target region. This process may produce unconnected areas or holes in the region, but these artifacts can be eliminated through regional operations to obtain the final segmentation results.
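The per-pixel feature construction of Step 1 can be sketched as follows. The paper does not specify the wavelet basis, so a Haar wavelet is assumed here, and the 11 dimensions are taken to be the original image, the nine detail sub-bands of a three-level transform, and the level-3 approximation, all expanded back to the original size by nearest-neighbor interpolation. This is an illustrative sketch under those assumptions, not the authors' exact implementation.

```python
import numpy as np

def haar_level(img):
    """One level of a 2-D Haar transform: returns (LL, LH, HL, HH) sub-bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return ((a + b + c + d) / 4.0,  # LL: low-frequency approximation
            (a + b - c - d) / 4.0,  # LH detail
            (a - b + c - d) / 4.0,  # HL detail
            (a - b - c + d) / 4.0)  # HH detail

def upsample_to(img, shape):
    """Nearest-neighbour expansion back to the original image size."""
    return np.kron(img, np.ones((shape[0] // img.shape[0],
                                 shape[1] // img.shape[1])))

def pixel_features(img):
    """11 feature planes per pixel: the original image, the 9 detail sub-bands
    of a 3-level Haar transform, and the level-3 approximation, all expanded
    to the original size (the 'cuboid' of Figure 5b)."""
    img = img.astype(float)
    planes = [img]
    ll = img
    for _ in range(3):
        ll, lh, hl, hh = haar_level(ll)
        planes += [upsample_to(band, img.shape) for band in (lh, hl, hh)]
    planes.append(upsample_to(ll, img.shape))
    return np.stack(planes, axis=-1)  # shape: H x W x 11

feats = pixel_features(np.random.rand(64, 64))
print(feats.shape)  # (64, 64, 11): one 11-dim feature vector per pixel
```

Stacking along the last axis yields one 11-dimensional feature vector per pixel, ready for the per-region classifiers of Steps 2 and 3.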


Figure 5. Multi-scale image feature structure: (a) pyramid structure of the images; (b) cuboid structure of the images.

The framework of the algorithm is shown in Figure 6.

3.4. Feature Selection and Extraction

To accurately identify the type of bearing surface defects, it is necessary to select efficient classification features. We proposed a practical algorithm that makes comprehensive use of scalar feature selection [25–31], correlation analysis [27,28], and vector selection [29,30], abbreviated as the SCV algorithm.

The SCV feature selection algorithm is mainly divided into the following four steps:

Step 1: Establish a feature pool. Collect as many features of the classified object as possible and combine them into a set of candidate features.

Step 2: Perform data acquisition and processing. Complete the conversion of a sample from an image to a normalized feature vector through the steps of image acquisition, image processing, and feature calculation.

Step 3: Set a target of dimension reduction. Make a comprehensive determination of the number of features to be used for classification based on the dimension of features in the feature pool, the peak phenomenon, and the number of training samples.

Step 4: Perform SCV feature selection. Based on the sample data and the dimensionality reduction target, use the designed feature selection algorithm to select a set of features from the feature pool with good classification performance. First, rank the features by their classification performance, from good to poor, based on the separability criterion to realize the conversion from feature pool X to feature vector x(d). Second, perform correlation analysis between features to eliminate strongly correlated features and reduce the dimensionality of the feature vector from x(d) to x(d1). Then, carry out a scalar feature selection on x(d1), i.e., select only the first d2 features from x(d1) to form a new feature vector x(d2) to complete the second dimensionality reduction. Finally, perform a vector feature selection on x(d2) to select the optimal classification set of d′ features from x(d2) to obtain the final classification feature vector x(d′).

Figure 6. The framework of the multi-neural network segmentation algorithm.


According to the above algorithm design, we established a feature selection criterion based on the correlation coefficient. The specific algorithm uses the Fisher criterion for scalar feature selection and $J_l$ as the selection criterion for the feature vector.

According to the algorithm procedure, assume that a feature pool, X, has been established, a standardized training sample set, T, has been obtained, and a dimensionality reduction goal, d′, has been set. Here, the feature pool has d features, expressed as X = {xl}, (l = 1, 2, . . . , d). Moreover, T has c types (c ≥ 3), designated as ω1, ω2, . . . , ωc, respectively, with each type ωi having Ni samples, for a total of N training samples. The goal is to choose d′ features from the feature pool of d features as the final classification features. The specific steps are:

Step 1: Perform Fisher discrimination ratio (FDR) feature sequencing. The features in the feature pool exist in an aggregate form. For ease of representation and computation, all features need to be sequenced in a certain order to form a feature vector. They are ordered by the FDR in descending order. The FDR characterizes the separability of a single feature: the larger the FDR value, the better the separability of the feature.

For any feature $x_l$ in the feature pool, its FDR value [46] is

$$F_{DR}(x_l) = \sum_{i=1}^{c} \sum_{j \neq i} \frac{\left(\mu_l^{(i)} - \mu_l^{(j)}\right)^2}{\sigma_l^{(i)2} + \sigma_l^{(j)2}}, \quad (1)$$

$$\mu_l^{(i)} = \frac{1}{N_i} \sum_{k=1}^{N_i} x_{l,k}^{(i)}, \quad (2)$$

$$\sigma_l^{(i)2} = \frac{1}{N_i - 1} \sum_{k=1}^{N_i} \left(x_{l,k}^{(i)} - \mu_l^{(i)}\right)^2, \quad (3)$$

Here, $x_{l,k}^{(i)}$ is the k-th sample of type $\omega_i$ represented by the feature $x_l$, $\mu_l^{(i)}$ is the average value of feature $x_l$ for type $\omega_i$, and $\sigma_l^{(i)2}$ is the variance of feature $x_l$ for type $\omega_i$. By carrying out the above calculation for all features in X, one can obtain the FDR values for all the features. By rearranging the features according to the magnitude of their FDR values, one obtains the feature vector $x = (x_1, x_2, \ldots, x_d)^T$.
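As a quick numerical check of Eqs. (1)–(3), the FDR of a single feature can be computed directly; the three-class data below are synthetic:

```python
import numpy as np

def fdr(values, labels):
    """Fisher discrimination ratio of one feature across c classes:
    sum over ordered pairs i != j of (mu_i - mu_j)^2 / (sigma_i^2 + sigma_j^2)."""
    classes = np.unique(labels)
    mu = {c: values[labels == c].mean() for c in classes}
    var = {c: values[labels == c].var(ddof=1) for c in classes}  # Eq. (3): N_i - 1
    total = 0.0
    for i in classes:
        for j in classes:
            if i != j:
                total += (mu[i] - mu[j]) ** 2 / (var[i] + var[j])
    return total

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 100),
                    rng.normal(5, 1, 100),
                    rng.normal(10, 1, 100)])
y = np.repeat([0, 1, 2], 100)
print(fdr(x, y))  # well-separated classes give a large FDR value
```

A feature with widely separated class means and small within-class variance scores high and is ranked early in the feature vector.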

Step 2: Perform correlation feature selection. For each feature $x_l$, arrange its values over all samples in the training sample set T in sequence to form a vector $x_l$. As each sample has d features, d such vectors can be generated. The correlation coefficient between any two vectors $x_i$ and $x_j$ is:

$$\rho(x_i, x_j) = \frac{\mathrm{Cov}(x_i, x_j)}{\sigma_{x_i}\,\sigma_{x_j}}, \quad (4)$$

$$\mathrm{Cov}(x_i, x_j) = E\left[(x_i - \mu_{x_i})(x_j - \mu_{x_j})\right], \quad (5)$$

$$\sigma_{x_i} = \sqrt{E\left[(x_i - \mu_{x_i})^2\right]}, \quad (6)$$

$$\mu_{x_i} = E(x_i). \quad (7)$$

Select the features based on the feature sequence and the correlation coefficients. First, eliminate the features whose correlation coefficient with $x_1$ is greater than the preset threshold. Then, repeat the procedure with the first remaining feature, eliminating the features whose correlation coefficients with it exceed the threshold, and so on, until the last feature is reached. Through this screening, one obtains a feature vector x(d1) whose features have low mutual correlation, thus accomplishing the first dimensionality reduction.
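The sequential screening described above can be sketched as follows; the 0.9 threshold is an illustrative choice, not a value from the paper:

```python
import numpy as np

def correlation_screen(X, threshold=0.9):
    """Greedy screening over FDR-ordered features (columns of X):
    keep a feature only if its |correlation| with every already-kept
    feature stays below the threshold (Eqs. (4)-(7))."""
    kept = []
    for l in range(X.shape[1]):
        if all(abs(np.corrcoef(X[:, l], X[:, k])[0, 1]) < threshold
               for k in kept):
            kept.append(l)
    return kept

rng = np.random.default_rng(1)
a = rng.normal(size=200)
X = np.column_stack([a,                                # feature 0
                     a + 0.01 * rng.normal(size=200),  # near-duplicate of 0
                     rng.normal(size=200)])            # independent feature
print(correlation_screen(X))  # the near-duplicate column 1 is dropped
```

Because the columns are already FDR-ordered, the greedy pass keeps the more separable member of each strongly correlated pair.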

Step 3: Perform FDR feature selection. After screening according to the correlation coefficient, the remaining features not only retain the classification information of the original features but also have a greatly reduced correlation between them, hence satisfying the basic conditions of scalar feature selection. Moreover, as the features have already been sorted by FDR, the first d2 features may be chosen directly to form a new feature vector x(d2), achieving the second dimensionality reduction.

Step 4: Perform vector feature selection. After the above two dimensionality-reduction operations, the number of features is sufficiently reduced for the normal use of the feature vector selection method. For example, if 10 of the 60 features were selected directly as the classification features, the criterion value would have to be computed $C_{60}^{10} = 7.5394 \times 10^{10}$ times, which is too many to implement. If there are 24 features after the two dimensionality reductions, then selecting 10 features out of 24 only requires computing the criterion value $C_{24}^{10} = 1.9613 \times 10^{6}$ times, which is four orders of magnitude smaller. The feature vector selection is implemented in two steps:

(1) Select any d′ features from the d2 features in x(d2) as a set and calculate the $J_l$ value of all combinations, which is as follows [24]:

$$J_l = \mathrm{tr}\left(S_W^{-1} S_B\right), \quad (8)$$

$$S_W = \sum_{i=1}^{c} p(\omega_i) \frac{1}{N_i} \sum_{k=1}^{N_i} \left(x_k^{(i)} - m^{(i)}\right)\left(x_k^{(i)} - m^{(i)}\right)^T, \quad (9)$$

$$S_B = \sum_{i=1}^{c} p(\omega_i) \left(m^{(i)} - m\right)\left(m^{(i)} - m\right)^T, \quad (10)$$

$$p(\omega_i) = \frac{N_i}{N}, \quad (11)$$

$$m^{(i)} = \frac{1}{N_i} \sum_{k=1}^{N_i} x_k^{(i)}, \quad (12)$$

$$m = \sum_{i=1}^{c} p(\omega_i)\, m^{(i)}, \quad (13)$$

where $x_k^{(i)}$ is the feature vector of the k-th sample of the $\omega_i$ type.

(2) Select the combination that maximizes the $J_l$ value, realize the third dimensionality reduction, and obtain the final classification feature vector x(d′).
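Both the combination counts quoted in Step 4 and the criterion of Eqs. (8)–(13) can be checked numerically. The two-class data below are synthetic, and `j_criterion` is an illustrative sketch rather than the authors' implementation:

```python
from itertools import combinations
from math import comb

import numpy as np

# Combination counts quoted in Step 4.
print(comb(60, 10))  # 75,394,027,566  (about 7.5394e10)
print(comb(24, 10))  # 1,961,256       (about 1.9613e6)

def j_criterion(X, y):
    """J_l = tr(S_W^{-1} S_B) for feature matrix X (N x d) and labels y,
    following Eqs. (8)-(13)."""
    classes, counts = np.unique(y, return_counts=True)
    N, d = X.shape
    Sw = np.zeros((d, d))
    means, priors = {}, {}
    for c, n in zip(classes, counts):
        Xi = X[y == c]
        means[c] = Xi.mean(axis=0)              # Eq. (12)
        priors[c] = n / N                       # Eq. (11)
        diff = Xi - means[c]
        Sw += priors[c] * (diff.T @ diff) / n   # Eq. (9)
    m = sum(priors[c] * means[c] for c in classes)  # Eq. (13)
    Sb = np.zeros((d, d))
    for c in classes:
        dm = (means[c] - m)[:, None]
        Sb += priors[c] * (dm @ dm.T)           # Eq. (10)
    return np.trace(np.linalg.inv(Sw) @ Sb)    # Eq. (8)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(4, 1, (50, 3))])
y = np.repeat([0, 1], 50)

# Steps (1)-(2): evaluate J_l over all d'-subsets and keep the best one.
best = max(combinations(range(3), 2),
           key=lambda idx: j_criterion(X[:, idx], y))
print(best, j_criterion(X[:, best], y))
```

Well-separated classes yield a large trace value, so the exhaustive search keeps the subset with the best joint separability.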

3.5. Defect Classification

The classification of bearing surface defects is completed by the classifier, and an error back-propagation training algorithm is used to complete the non-linear mapping from the input signal to the output mode. The steps are as follows:

Step 1: Choose an initial value of the weight coefficients.
Step 2: Input a sample $X = (x_0, x_1, \ldots, x_{N-1})^T$ and its expected output $d = (d_0, d_1, \ldots, d_{M-1})^T$.
Step 3: Calculate the actual output $y = (y_0, y_1, \ldots, y_{M-1})^T$.
Step 4: Backward-adjust the weights from the output layer forward, layer by layer, using the adjustment formula [36]:

$$\omega_{ij}(t+1) = \omega_{ij}(t) + \eta \delta_j x_i', \quad (14)$$

where $\omega_{ij}$ is the weight of the connection from node i in the hidden layer or input layer to node j at time t, $x_i'$ is either the output of node i or the input to node i, $\eta$ is the gain term, and $\delta_j$ is the error term of node j. If node j is an output-layer node, then:

$$\delta_j = y_j\left(1 - y_j\right)\left(d_j - y_j\right), \quad (15)$$

where $d_j$ is the expected output of node j, and $y_j$ is the actual output of node j. If node j is on an intermediate hidden layer, then:

$$\delta_j = x_j'\left(1 - x_j'\right) \sum_k \delta_k \omega_{jk}, \quad (16)$$

Here, k runs over all the nodes in the layer downstream of node j, i.e., the nodes to which node j connects. To improve the convergence, the following formula may be used to apply a momentum correction to the weight update $\Delta\omega_{ij}$:

$$\omega_{ij}(t+1) = \omega_{ij}(t) + \eta \delta_j x_i' + a\left(\omega_{ij}(t) - \omega_{ij}(t-1)\right), \quad (17)$$

Here, 0 < a < 1; that is, the current weight adjustment takes into account the previous weight update.

Step 5: Go back to Step 3 and repeat the execution until the error requirements are met.
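The update rules of Eqs. (15) and (17) can be traced for a single connection; all numeric values below are illustrative:

```python
def delta_output(y_j, d_j):
    """Error term of an output node, Eq. (15): y_j(1 - y_j)(d_j - y_j)."""
    return y_j * (1 - y_j) * (d_j - y_j)

def update_weight(w_t, w_prev, eta, delta_j, x_i, a=0.5):
    """Momentum update of Eq. (17), with 0 < a < 1."""
    return w_t + eta * delta_j * x_i + a * (w_t - w_prev)

d = delta_output(0.8, 1.0)  # 0.8 * 0.2 * 0.2 = 0.032
w_new = update_weight(w_t=0.5, w_prev=0.4, eta=0.1, delta_j=d, x_i=1.0)
print(round(w_new, 4))      # 0.5 + 0.0032 + 0.5*(0.5-0.4) = 0.5532
```

The momentum term `a * (w_t - w_prev)` carries part of the previous adjustment forward, which damps oscillation and speeds convergence.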

4. Experiment

4.1. Experimental Design

The test object of this paper is the outer ring of 6204 deep groove ball bearings (6204 bearings, outer diameter 47 mm, inner diameter 20 mm, height 14 mm, weight 0.11 kg, Cr: 15.8 kN, Cor: 7.88 kN, DW: 9.525 mm, Z: 7). Based on the experimental requirements, the detection system consisted of a lower machine and the main computer. The lower machine included a programmable logic controller (PLC), a DALSA linear array camera, a linear light source, and a self-made loading and unloading control mechanism. The main computer was a Dell Precision T3630 graphics workstation (Intel i7-10700K, 8 cores, 16 threads, 3.8 GHz, 32 GB memory, and a Windows operating system). The programming languages used were MATLAB and C++. The lower machine was responsible for digital IO (input and output) and motion control, and the main computer was responsible for image acquisition, image processing, and outputting the results.

In operation, the bearing to be inspected was automatically placed on the turntable and centered, the image acquisition switch was turned on, the encoder drove the turntable to rotate at a constant preset speed, the line light source was turned on, and the line array camera automatically collected images and sent them to the main computer, which performed various operations and processing on the images, positioned the image under test, detected and identified the inspected object, and generated motion control decisions based on the processing results.

In the test of the model 6204 bearing, three types of commonly occurring surface defects in the machining process of the outer ring, namely over-grinding, scratches, and bruises, were selected as identification objects. A total of 900 defect samples were collected, with 300 of each type of defect. Some examples of the defects are shown in Figure 7.



Figure 7. Examples of the defect images: (a) original image of the bruise defect; (b) original image of the scratch defect; (c) original image of over-grinding.

4.2. Segmentation Experiment

4.2.1. Experiment Process

This experiment was designed to verify that the Lc-MNN algorithm can effectively improve the accuracy of segmentation for images of bearing surface defects. The threshold algorithm has the advantages of simple calculation, high operating efficiency, and strong versatility, and is widely used; it was therefore used here as the comparison algorithm for the Lc-MNN algorithm. During the experiment, the maximum error acceptance rate in each network training was set to 0.001. Figure 8 shows the original image to be segmented, Figure 9 shows the standard image of manual segmentation, Figure 10 shows the multi-scale features of the extracted original image, and Figure 11 shows the process and result of the multi-region division. The small regions in Figure 11d start from the small area in the upper right-hand corner and are labeled a, b, and c in a clockwise direction.

Figure 8. Image to be segmented.

Figure 9. Standard segmented image.

Figure 10. Multi-scale feature images.





To illustrate the classification process of each small region, the segmentation process of a small area, a, in Figure 11d was chosen for detailed description. Figure 12a is an image of the region to be classified, Figure 12b is an image of the background region, and Figure 12c is an image of the target region. The neural network was established with MATLAB, and the corresponding training was carried out. Pixels in the region to be classified were classified using the trained neural network to obtain the pixels belonging to the target region, as shown in Figure 12d. By performing this operation on all small regions, we obtained the target pixels in the region to be classified. By combining all the target pixels with the original region, the final segmentation result was obtained.
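The original per-region classifiers were trained in MATLAB. As an illustrative stand-in, the same idea can be sketched with a simple logistic-regression pixel classifier on synthetic 11-dimensional features; `train_pixel_classifier` and the sample data are assumptions, not the authors' code:

```python
import numpy as np

def train_pixel_classifier(Xt, yt, lr=0.5, epochs=500):
    """Stand-in for the per-region neural network: logistic regression on
    11-dim pixel features, trained by batch gradient descent."""
    w = np.zeros(Xt.shape[1])
    b = 0.0
    for _ in range(epochs):
        z = np.clip(Xt @ w + b, -30, 30)   # clip to avoid exp overflow
        p = 1.0 / (1.0 + np.exp(-z))
        g = p - yt                          # gradient of the log-loss
        w -= lr * Xt.T @ g / len(yt)
        b -= lr * g.mean()
    def predict(X):
        z = np.clip(X @ w + b, -30, 30)
        return 1.0 / (1.0 + np.exp(-z)) > 0.5
    return predict

rng = np.random.default_rng(3)
target = rng.normal(1.0, 0.3, (200, 11))       # target-region samples
background = rng.normal(-1.0, 0.3, (200, 11))  # background-region samples
clf = train_pixel_classifier(np.vstack([target, background]),
                             np.r_[np.ones(200), np.zeros(200)])
print(clf(target).mean())  # fraction of target pixels recovered
```

Running one such classifier per rectangle and taking the union of the predicted target pixels mirrors the per-region scheme described above.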


Figure 11. Region segmentation process with multi-neural network: (a) initial segmented region; (b) morphological processing result; (c) boundary polygon; (d) multi-neural network region.


Figure 12. Segmentation process of a small region: (a) small region a to be segmented; (b) background region in the small region a; (c) target region in the small region a; (d) segmentation result of the small region.



4.2.2. Discussion of Results

This experiment used the pixel number error criterion, which is defined as follows [32]:

PE = P(o) × P(b|o) + P(b) × P(o|b), (18)

where P(b|o) is the probability that the object is incorrectly classified as the background, P(o|b) is the probability that the background is incorrectly classified as the object, and P(o) and P(b) represent the a priori probabilities of the proportions of the object and the background in the image, respectively. The smaller the PE, the fewer pixels were misclassified and the higher the accuracy of image segmentation. Figure 13 shows the results obtained from the Lc-MNN segmentation algorithm and those obtained from the threshold segmentation algorithm, as well as the difference from the standard image.
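Eq. (18) can be evaluated directly from a predicted object mask and a manually segmented ground-truth mask:

```python
import numpy as np

def pixel_error(pred, truth):
    """PE = P(o)P(b|o) + P(b)P(o|b) from boolean object masks, Eq. (18)."""
    n = truth.size
    p_o = truth.sum() / n                                   # object prior
    p_b = 1 - p_o                                           # background prior
    p_b_given_o = (~pred &  truth).sum() / truth.sum()      # object -> background
    p_o_given_b = ( pred & ~truth).sum() / (~truth).sum()   # background -> object
    return p_o * p_b_given_o + p_b * p_o_given_b

truth = np.zeros((10, 10), bool); truth[2:8, 2:8] = True  # 36 object pixels
pred  = np.zeros((10, 10), bool); pred[2:8, 2:7] = True   # misses one column
print(pixel_error(pred, truth))  # 6 missed object pixels out of 100 -> 0.06
```

The two conditional terms weight each kind of misclassification by the prior of the class it came from, so PE equals the overall fraction of misclassified pixels.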



Figure 13. Comparison of segmentation results between the local multi-neural network (Lc-MNN) algorithm and the threshold algorithm: (a) Lc-MNN algorithm segmentation result; (b) threshold algorithm segmentation result; (c) Lc-MNN algorithm segmented region; (d) threshold algorithm segmented region; (e) Lc-MNN algorithm b|o; (f) threshold algorithm b|o; (g) Lc-MNN algorithm o|b; (h) threshold algorithm o|b.

The calculated values of the two methods are listed in Table 1.

Table 1. Lc-MNN segmentation algorithm and threshold segmentation algorithm PE values.

Algorithm              b|o     o|b    P(b|o)  P(o|b)  P(o)    P(b)    PE
Threshold method [24]  11,172  2660   0.0537  0.0093  0.4222  0.5778  0.0281
Lc-MNN                 2208    1220   0.0106  0.0043  0.4222  0.5778  0.0070

Table 1 lists the PE values computed from the two algorithms. As the results in Table 1 show, the image segmented using the Lc-MNN algorithm proposed in this work showed considerable improvement over the threshold segmentation algorithm in terms of both background misclassification and object misclassification. The PE value based on the Lc-MNN algorithm was 75% less than that based on the threshold segmentation algorithm. Thus, the segmentation accuracy improved substantially, and the expected target was reached.
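As a consistency check, the PE values in Table 1 follow from Eq. (18) and the tabulated probabilities; small discrepancies come from the four-digit rounding of the probabilities:

```python
# Conditional and prior probabilities taken from Table 1.
pe_threshold = 0.4222 * 0.0537 + 0.5778 * 0.0093  # ~0.0280, vs. 0.0281 tabulated
pe_lcmnn     = 0.4222 * 0.0106 + 0.5778 * 0.0043  # ~0.0070, as tabulated
print(pe_threshold, pe_lcmnn)
print(1 - pe_lcmnn / pe_threshold)  # ~0.75: the 75% reduction quoted above
```

The ratio confirms the 75% reduction in PE reported for the Lc-MNN algorithm.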

4.3. Defect Classification Experiment

4.3.1. Experimental Process

Three sets of features were chosen using the random selection algorithm, and one set of features each was extracted with the scalar feature selection algorithm and the algorithm proposed in this work, for a total of five sets of experiments. Then, 100 samples were randomly chosen from each defect category as training samples, each sample was assigned an ID number, and the remaining 600 samples were each assigned an ID number and used as test samples. The 57 features listed in Table 1 were treated as original features. The target for dimensionality reduction was set to 10 features chosen from 60 features.

Images were collected with a linear camera in a scanning mode, the defective regions were segmented using the dynamic threshold segmentation algorithm, and a component whitening algorithm was used for normalization processing. Based on 300 training samples, features were selected using the random extraction algorithm, the scalar feature selection algorithm, and the algorithm from this paper. The neural network structure was determined according to the dimension of the input features, the categories of the output, and other factors. The selected features were then used to train multiple neural networks with the same structure, and the trained neural networks were used to identify the test samples. The identification rate of each method was obtained and used in the comparison study.



4.3. Defect Classification Experiment

4.3.1. Experimental Process

Three sets of features were chosen using the random selection algorithm, and one set of features each was extracted with the scalar feature selection algorithm and with the algorithm proposed in this work, for a total of five sets of experiments. Then, 100 samples were randomly chosen from each defect category as training samples, and the remaining 600 samples were used as test samples; every sample was assigned an ID number. The 57 features listed in Table 1 were treated as the original features. The target for dimensionality reduction was set at 10 features chosen from 60 features. Images were collected with a line-scan camera, the defective regions were segmented using the dynamic threshold segmentation algorithm, and a component whitening algorithm was used for normalization. Based on the 300 training samples, features were selected using the random extraction algorithm, the scalar feature selection algorithm, and the algorithm proposed in this paper. The neural network structure was determined according to the dimension of the input features, the number of output categories, and other factors. The selected features were then used to train multiple neural networks with the same structure, and the trained networks were used to identify the test samples. The identification rate of each method was obtained and used in the comparison study.
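The sampling protocol above (100 randomly chosen training samples per defect class, with the remaining 600 sample IDs forming the test set) can be sketched as follows; the function name and the synthetic label array are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def split_train_test(labels, n_train_per_class=100):
    """Pick n_train_per_class random samples of each class for training;
    every remaining sample ID becomes part of the test set."""
    train_idx = []
    for c in np.unique(labels):
        class_ids = np.flatnonzero(labels == c)
        train_idx.extend(rng.choice(class_ids, n_train_per_class, replace=False))
    train_idx = np.sort(np.array(train_idx))
    test_idx = np.setdiff1d(np.arange(len(labels)), train_idx)
    return train_idx, test_idx

# 900 samples in total: three defect classes (bruise, scratch, over-grinding),
# 300 samples each, as implied by the experiment description.
labels = np.repeat(np.arange(3), 300)
train_idx, test_idx = split_train_test(labels)
print(len(train_idx), len(test_idx))  # 300 600
```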

4.3.2. Discussion of Results

The test results are shown in Table 2. The identification rates were low and quite unstable for the three groups of random-algorithm tests; the highest identification rate was 82.2%, and the lowest was only 19.0%. The scalar feature selection experiment conducted with the FDR criterion also did not achieve good results; its identification rate was only 66.7%. Further analysis revealed that the classification error consisted mainly of misidentifying all 200 over-grinding defects as scratches, while the other two categories were both identified correctly. This result confirmed a flaw of the algorithm itself: the individually selected features each have good separability, but there may be strong correlations among them, which reduces the overall classification ability of the feature combination and leads to misidentification. In comparison, the algorithm of this paper achieved good results. Among the 600 test samples, only three were misidentified: the scratch defect #402 was misidentified as a bruise, and the scratch defects #492 and #600 were misidentified as over-grinding, as shown in Figure 14. The remaining samples were all correctly identified, and the identification rate was as high as 99.5%, reaching the expected goal. Table 3 shows the rest of the experimental data, including the five sets of data for each type of defect and the samples (marked in gray) that were incorrectly identified.
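The failure mode described above (features that are individually discriminative under the FDR criterion yet strongly correlated, and therefore redundant as a combination) can be reproduced on synthetic data; the data and names below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

def fdr(x, y):
    """Two-class Fisher Discrimination Ratio of a scalar feature x."""
    a, b = x[y == 0], x[y == 1]
    return (a.mean() - b.mean()) ** 2 / (a.var() + b.var())

y = np.repeat([0, 1], 500)
f1 = 3.0 * y + rng.normal(0.0, 1.0, 1000)       # separable on its own
f2 = 2.0 * f1 + rng.normal(0.0, 0.01, 1000)     # near-copy of f1

print(fdr(f1, y), fdr(f2, y))        # both individually high (~4.5)
print(np.corrcoef(f1, f2)[0, 1])     # ~1.0: the pair is redundant
```

A purely scalar criterion would rank both features highly, while the correlation analysis stage of the proposed SCV algorithm would discard one of them.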


Table 2. Comparison of the identification rate of each feature selection algorithm.

Algorithm                               ID Numbers of Selected Features    Identification Rate
Random algorithm [14]                   3 6 11 16 22 32 34 35 40 55        19.0%
                                        4 7 16 30 31 37 45 51 52 53        82.2%
                                        7 8 13 16 33 37 41 43 52 53        76.3%
FDR algorithm [46]                      2 6 9 15 16 20 21 22 26 42         66.7%
Algorithm of this paper (SCV algorithm) 6 7 9 13 14 17 21 29 46 57         99.5%


Figure 14. Misclassified samples: (a) scratch defect, #402; (b) scratch defect, #492; (c) scratch defect, #600.

Table 3. Input and output values of the back-propagation neural network of some of the samples detected by the proposed algorithm.

No.   BP Input Values (selected features 6, 7, 9, 13, 14, 17, 21, 29, 47, 57)     BP Output Values (Over-Grinding, Bruise, Scratch)   Expected Output Value
1     2.1734 9.7401 4.3716 2.6493 1.0244 3.6416 2.8420 1.5058 1.0000 2.3881       1.0000 0.0000 0.0000                                100
2     2.1477 9.1473 4.3246 2.7363 1.0530 3.6317 2.7342 1.2733 1.0000 2.0657       1.0000 0.0000 0.0000                                100
3     2.8852 9.6369 6.0042 3.3442 1.0248 3.0350 3.7834 2.6633 0.4195 0.8168       1.0000 0.0000 0.0000                                100
4     2.2035 9.8101 4.1567 1.9901 1.0562 4.3653 2.7649 2.8164 0.4335 1.7146       1.0000 0.0000 0.0000                                100
5     2.2320 9.6413 4.3310 2.2425 1.0539 4.4822 2.9632 2.3252 1.0000 0.9680       9.9999 0.0000 0.0000                                100
201   2.8465 9.7186 5.6528 2.2053 1.0060 4.3872 0.3908 0.0007 0.5272 1.5361       0.0000 1.0000 0.0000                                010
202   0.2690 9.7835 4.7058 1.4345 1.0028 6.5029 0.3862 0.0013 0.4283 2.2192       0.0000 1.0000 0.0000                                010
203   0.2441 9.4514 4.8214 2.3807 1.0413 3.8206 0.3202 0.0003 0.3973 1.8550       0.0000 1.0000 0.0000                                010
204   0.2583 9.5497 4.5472 1.5582 1.0291 6.0010 0.3547 0.0008 0.3582 2.1065       0.0000 1.0000 0.0000                                010
205   0.1974 9.7454 3.4700 1.4706 1.0183 6.4749 0.2754 0.0003 0.4261 2.4259       0.0000 1.0000 0.0000                                010
411   2.2685 6.8706 2.4146 4.0845 1.1736 2.0472 1.2909 0.0333 1.0000 2.0641       0.0000 0.0000 1.0000                                001
412   2.3251 5.8266 1.6706 1.3489 1.5147 3.0650 1.3463 0.1645 0.5711 1.9900       0.0000 0.0001 1.0000                                001
413   3.8989 4.1859 4.6868 4.6220 2.6145 1.7812 2.1078 0.1467 0.8150 0.8293       0.0000 0.0000 1.0000                                001
414   3.2565 6.1437 4.4819 4.6335 1.4137 2.1359 2.2692 0.2253 0.7318 0.5897       0.0011 0.0000 0.9961                                001
415   1.5370 5.7187 1.3504 1.5234 1.5495 3.0972 1.0630 0.0476 0.8010 1.1973       0.0000 0.0000 1.0000                                001
402   1.3690 7.9615 1.3635 3.2840 1.0440 2.5276 0.8064 0.0073 0.5003 3.5863       0.0000 0.8993 0.0744                                001
492   2.8178 8.2273 4.2256 2.3831 1.1124 3.9057 2.8422 1.6632 1.0000 0.3394       0.9669 0.0000 0.1408                                001
600   3.6504 8.7064 4.3215 2.4014 1.0751 3.7314 2.8857 1.8399 1.0000 0.5205       0.9539 0.0000 0.1735                                001
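A row of Table 3 turns into a class decision by taking the output node with the largest activation (winner-take-all) and comparing it against the one-hot expected code. The sketch below uses output values transcribed from Table 3, with the class order following the table's output columns; the function name is illustrative:

```python
import numpy as np

CLASSES = ["over-grinding", "bruise", "scratch"]

def decide(bp_output):
    """Winner-take-all decision over the three BP output nodes."""
    return CLASSES[int(np.argmax(bp_output))]

print(decide([1.0000, 0.0000, 0.0000]))  # sample #1: over-grinding (correct)
print(decide([0.0000, 0.8993, 0.0744]))  # sample #402: bruise (a misclassified scratch)
print(decide([0.9669, 0.0000, 0.1408]))  # sample #492: over-grinding (a misclassified scratch)
```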

5. Conclusions

We proposed a method for detecting and classifying bearing surface defects. The main contributions and innovations of this work are as follows:

(1) A local multi-neural network (Lc-MNN) algorithm for image segmentation is proposed; the method includes three stages: wavelet feature extraction, Lc-MNN region division, and Lc-MNN classification. The classification features are obtained by expanding the images at the various layers of the wavelet transform. The Lc-MNN region division divides the area near the initial segmentation boundary into a region of training samples and a region of samples to be classified, and a polygon fitting algorithm is used to divide this area into multiple local areas. The Lc-MNN classification process classifies the pixels in each region to be classified using the neural networks within that region to discriminate target pixels from background pixels. After combining the target pixels obtained and performing some post-processing, one obtains segmentation results of higher precision. The experiments indicated that the proposed algorithm can effectively improve the accuracy of segmentation, which is one of the innovations of this work.

(2) We proposed the SCV algorithm for feature selection. The algorithm first removes similar features through correlation analysis, further screens the results using a scalar feature selection algorithm, and finally uses a feature vector selection algorithm to select the final classification features. The experiments indicated that the SCV algorithm can effectively improve the classification accuracy while avoiding large-scale computation, which is another innovation of this research.

Through comparative experiments with the two improvements described above, we found that the identification rate reached 99.5% on 600 test samples with three types of defect, without large-scale calculation. In future work, we hope to add more sample images to the bearing surface defect dataset and improve the network structure for defect feature extraction. We also plan to study defect detection, classification, and identification techniques based on deep learning and the automatic iteration of internet-based classifiers.
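The three-stage SCV pipeline summarized above (correlation pruning, scalar FDR screening, and vector selection over feature combinations) can be sketched as follows. This is a schematic under stated assumptions, not the paper's implementation: the correlation threshold, the use of a between/within scatter ratio as the vector-selection score, and all names are illustrative.

```python
import numpy as np

def scv_select(X, y, corr_thresh=0.95, n_scalar=20, n_final=10):
    """Three-stage selection: correlation pruning, FDR screening, then a
    greedy forward (vector) search over feature combinations."""
    classes = np.unique(y)

    # Stage 1: correlation analysis -- drop near-duplicates of kept features.
    kept = []
    for j in range(X.shape[1]):
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < corr_thresh for k in kept):
            kept.append(j)

    # Stage 2: scalar selection -- rank survivors by mean pairwise two-class FDR.
    def fdr(j):
        scores = [(X[y == a, j].mean() - X[y == b, j].mean()) ** 2
                  / (X[y == a, j].var() + X[y == b, j].var() + 1e-12)
                  for i, a in enumerate(classes) for b in classes[i + 1:]]
        return float(np.mean(scores))
    kept = sorted(kept, key=fdr, reverse=True)[:n_scalar]

    # Stage 3: vector selection -- greedy forward search scoring feature
    # *combinations* by a between/within scatter ratio.
    def score(subset):
        Z = X[:, subset]
        mu = Z.mean(axis=0)
        between = sum((y == c).sum() * np.sum((Z[y == c].mean(axis=0) - mu) ** 2)
                      for c in classes)
        within = sum(np.sum((Z[y == c] - Z[y == c].mean(axis=0)) ** 2)
                     for c in classes)
        return between / (within + 1e-12)
    selected = []
    while len(selected) < min(n_final, len(kept)):
        best = max((j for j in kept if j not in selected),
                   key=lambda j: score(selected + [j]))
        selected.append(best)
    return selected

# Synthetic demonstration: feature 0 separates the 3 classes, feature 1 is a
# near-duplicate of it (should be pruned), the rest are noise.
rng = np.random.default_rng(0)
y = np.repeat(np.arange(3), 50)
X = rng.normal(size=(150, 15))
X[:, 0] += 5.0 * y
X[:, 1] = X[:, 0] + rng.normal(0.0, 1e-3, size=150)
selected = scv_select(X, y, n_scalar=8, n_final=3)
print(selected)
```

On this synthetic data the near-duplicate feature 1 is removed at the correlation stage even though its scalar FDR score is high, which is exactly the behavior that distinguishes SCV from pure scalar selection.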

Author Contributions: M.L. and C.-L.C. made substantial contributions to the conception and design. M.L. drafted the manuscript, acquired the data, and conducted the analysis and interpretation of the data. C.-L.C. revised the manuscript for critically important intellectual content. All authors have read and agreed to the published version of the manuscript.

Funding: This research was supported by the National Social Science Fund of China (20BGL141).

Informed Consent Statement: This study is based only on theoretical basic research; it does not involve human subjects.

Data Availability Statement: The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest: The authors declare no conflict of interest.

Abbreviations

ROI      Region of Interest
LBF      Local Binary Fitting
MLoG     Modified Laplacian-of-Gaussian
BP       Back Propagation
SVM-RFE  Support Vector Machine Recursive Feature Elimination
PCA      Principal Component Analysis
LLE      Locally Linear Embedding
Lc-MNN   Local Multiple Neural Network Algorithm
SCV      Scalar selection, Correlation analysis, Vector selection
FDR      Fisher Discrimination Ratio
PLC      Programmable Logic Controller

References

1. Shen, H.; Li, S.; Gu, D.; Chang, H. Bearing defect inspection based on machine vision. Measurement 2012, 45, 719–733. [CrossRef]
2. Malamas, E.N.; Petrakis, E.G.M.; Zervakis, M.; Petit, L.; Legat, J.-D. A survey on industrial vision systems, applications, and tools. Image Vis. Comput. 2003, 21, 171–188. [CrossRef]
3. Hemmati, F.; Miraskari, M.; Gadala, M.S. Application of wavelet packet transform in roller bearing fault detection and life estimation. J. Phys. Conf. Ser. 2018, 1074, 012142. [CrossRef]
4. Bastami, A.R.; Vahid, S. Estimating the size of naturally generated defects in the outer ring and roller of a tapered roller bearing based on autoregressive model combined with envelope analysis and discrete wavelet transform. Measurement 2020, 159, 107767. [CrossRef]
5. Sobie, C.; Freitas, C.; Nicolai, M. Simulation-driven machine learning: Bearing fault classification. Mech. Syst. Signal Process. 2018, 99, 403–419. [CrossRef]
6. Zheng, H.; Kong, L.X.; Nahavandi, S. Automatic inspection of metallic surface defects using genetic algorithms. J. Mater. Process. Technol. 2002, 125, 427–433. [CrossRef]
7. Pernkopf, F.; O'Leary, P. Image acquisition techniques for automatic visual inspection of metallic surfaces. NDT E Int. 2003, 36, 609–617. [CrossRef]
8. Phung, V.T.; Pacas, M. Sensorless harmonic speed control and detection of bearing faults in repetitive mechanical systems. In Proceedings of the IEEE 3rd International Future Energy Electronics Conference and ECCE Asia, Kaohsiung, Taiwan, 3–7 June 2017; pp. 1646–1651.
9. Kunakornvong, P.; Sooraksa, P. A Practical Low-Cost Machine Vision Sensor System for Defect Classification on Air Bearing Surfaces. Sens. Mater. 2017, 29, 629–644.
10. Chen, Y.J.; Tsai, J.C.; Hsu, Y.C. A real-time surface inspection system for precision steel balls based on machine vision. Meas. Sci. Technol. 2016, 27, 74–100. [CrossRef]
11. Mikołajczyk, T.; Nowicki, K.; Kłodowski, A.; Pimenov, D.Y. Neural network approach for automatic image analysis of cutting edge wear. Mech. Syst. Signal Process. 2017, 88, 100–110. [CrossRef]
12. Van, M.; Kang, H.J. Bearing Defect Classification Based on Individual Wavelet Local Fisher Discriminant Analysis with Particle Swarm Optimization. IEEE Trans. Ind. Inform. 2017, 12, 124–135. [CrossRef]
13. Mikołajczyk, T.; Nowicki, K.; Bustillo, A.; Pimenov, D.Y. Predicting tool life in turning operations using neural networks and image processing. Mech. Syst. Signal Process. 2018, 104, 503–513. [CrossRef]
14. Guo, Y.; Jiao, L.; Wang, S.; Wang, S.; Liu, F. Fuzzy Sparse Autoencoder Framework for Single Image per Person Face Recognition. IEEE Trans. Cybern. 2018, 48, 2402–2415. [CrossRef] [PubMed]
15. Chen, H.; Zhang, B.; Li, M.; Zhang, C. Surface defect detection of bearing roller based on image optical flow. Chin. J. Sci. Instrum. 2018, 39, 198–206.
16. Ciobanu, R.; Rizescu, D.; Rizescu, C. Automatic Sorting Machine Based on Vision Inspection. Int. J. Model. Optim. 2017, 7, 286–290.
17. Riggio, M.; Sandak, J.; Franke, S. Application of imaging techniques for detection of defects, damage and decay in timber structures onsite. Constr. Build. Mater. 2015, 101, 1241–1252. [CrossRef]
18. Martínez, S.S.; Vázquez, C.O.; García, J.G.; Ortega, J.G. Quality inspection of machined metal parts using an image fusion technique. Measurement 2017, 111, 374–383. [CrossRef]
19. Huang, D.; Liao, S.; Sunny, A.I.; Yu, S. A novel automatic surface scratch defect detection for fluid-conveying tube of Coriolis mass flow-meter based on 2D-direction filter. Measurement 2018, 126, 332–341. [CrossRef]
20. Chondronasios, A.; Popov, I.; Jordanov, I. Feature selection for surface defect classification of extruded aluminum profiles. Int. J. Adv. Manuf. Technol. 2015, 83, 33–41. [CrossRef]
21. Tao, X.; Zhang, D.; Ma, W.; Liu, X.; Xu, D. Automatic Metallic Surface Defect Detection and Recognition with Convolutional Neural Networks. Appl. Sci. 2018, 8, 1575. [CrossRef]
22. Lin, J.; Yao, Y.; Ma, L.; Wang, Y. Detection of a casting defect tracked by deep convolution neural network. Int. J. Adv. Manuf. Technol. 2018, 97, 573–581. [CrossRef]
23. Yu, W.; Yanjie, L. An intelligent machine vision system for detecting surface defects on packing boxes based on support vector machine. Meas. Control 2019, 52, 1102–1110.
24. Labudzki, R.; Legutko, S.; Raos, P. The essence and applications of machine vision. Teh. Tech. Gaz. 2019, 21, 903–909.
25. Kittler, J.; Illingworth, J. Minimum error thresholding. Pattern Recognit. 1986, 19, 41–47. [CrossRef]
26. Wong, A.K.; Sahoo, P.K. A gray-level threshold selection method based on maximum entropy principle. IEEE Trans. Syst. Man Cybern. 1989, 19, 866–871. [CrossRef]
27. Tsai, W.-H. Moment-preserving thresholding: A new approach. Comput. Vis. Graph. Image Process. 1985, 29, 377–393. [CrossRef]
28. Chen, P.C.; Pavlidis, T. Segmentation by texture using a co-occurrence matrix and a split-and-merge algorithm. Comput. Graph. Image Process. 1979, 10, 172–182. [CrossRef]
29. Hojjatoleslami, S.; Kittler, J. Region growing: A new approach. IEEE Trans. Image Process. 1998, 7, 1079–1084. [CrossRef]
30. Gao, Y.; Mas, J.F.; Kerle, N.; Navarrete Pacheco, J.A. Optimal region growing segmentation and its effect on classification accuracy. Int. J. Remote Sens. 2011, 32, 3747–3763. [CrossRef]
31. Biswas, S.; Hazra, R. Active contours driven by modified LoG energy term and optimised penalty term for image segmentation. IET Image Process. 2020, 14, 3232–3242. [CrossRef]
32. Tarkhaneh, O.; Shen, H. An adaptive differential evolution algorithm to optimal multi-level thresholding for MRI brain image segmentation. Expert Syst. Appl. 2019, 138, 112820. [CrossRef]
33. Abd Elaziz, M.; Bhattacharyya, S.; Lu, S. Swarm selection method for multilevel thresholding image segmentation. Expert Syst. Appl. 2019, 138, 112818. [CrossRef]
34. Alroobaea, R.; Rubaiee, S.; Bourouis, S.; Bouguila, N.; Alsufyani, A. Bayesian inference framework for bounded generalized Gaussian-based mixture model and its application to biomedical images classification. Int. J. Imaging Syst. Technol. 2019, 30, 18–30. [CrossRef]
35. Narisetti, N.; Henke, M.; Seiler, C.; Shi, R.; Junker, A.; Altmann, T.; Gladilin, E. Semi-automated Root Image Analysis (saRIA). Sci. Rep. 2019, 9, 19674. [CrossRef] [PubMed]
36. Zhang, Y.; Xu, C.; Li, C.; Yu, H.; Cao, J. Wood defect detection method with PCA feature fusion and compressed sensing. J. For. Res. 2015, 26, 745–751. [CrossRef]
37. Lu, M.; Mou, Y. Bearing Defect Classification Algorithm Based on Autoencoder Neural Network. Adv. Civ. Eng. 2020, 12. [CrossRef]
38. Yan, K.; Zhang, D. Feature selection and analysis on correlated gas sensor data with recursive feature elimination. Sens. Actuators B Chem. 2015, 212, 353–363. [CrossRef]
39. Lu, M.; Chen, L. Efficient Object Detection Algorithm in Kitchen Appliance Scene Images Based on Deep Learning. Math. Probl. Eng. 2020, 12. [CrossRef]
40. Spetale, F.E.; Bulacio, P.; Guillaume, S.; Murillo, J.; Tapia, E. A spectral envelope approach towards effective SVM-RFE on infrared data. Pattern Recognit. Lett. 2016, 71, 59–65. [CrossRef]
41. Ding, X.; Yang, Y.; Stein, E.A.; Ross, T.J. Multivariate classification of smokers and nonsmokers using SVM-RFE on structural MRI images. Hum. Brain Mapp. 2015, 36, 4869–4879. [CrossRef]
42. Zhao, J.; Jia, M. Feature extraction with global-locally preserving projections based on curvelet transform. In Electronics, Electrical Engineering, and Information Science, Proceedings of the 2015 International Conference on Electronics, Electrical Engineering, and Information Science (EEEIS2015), Guangzhou, China, 7–9 August 2015; World Scientific: Singapore, 2015.
43. Yildiz, K.; Buldu, A.; Demetgul, M. A thermal-based defect classification method in textile fabrics with K-nearest neighbor algorithm. J. Ind. Text. 2016, 45, 780–795. [CrossRef]
44. Dubey, S.R.; Dixit, P.; Singh, N.; Gupta, J.P. Infected fruit part detection using K-means clustering segmentation technique. IJIMAI 2013, 2, 65–72. [CrossRef]
45. Mu, W.; Gao, J.; Jiang, H.; Wang, Z.; Chen, F.; Dang, C. Automatic classification approach to weld defects based on PCA and SVM. Insight-Non-Destr. Test. Cond. Monit. 2013, 55, 535–539. [CrossRef]
46. Muhammad, G. Date fruits classification using texture descriptors and shape-size features. Eng. Appl. Artif. Intell. 2015, 37, 361–367. [CrossRef]