Reducing Workload by Defective Product Classification
Author: Dr. Nguyen Ngoc Tam
Date Published: February 7th, 2020

Contents: Abstract, Introduction, Background, Solution, Results, Conclusion, How can FPT help?, About the author
Abstract

Playing the role of eyes and ears in the manufacturing process, Artificial Intelligence/Machine Learning (AI/ML) is helping to address the growing challenges of product quality control. According to the corporate insurance carrier AGCS, "defective products not only pose a serious safety risk to the public but can also cause significant financial and reputational damage to the companies concerned. Defective product incidents have caused insured losses in excess of $2B over the past five years, making them the largest generator of liability losses." With over 50% of manufacturers planning to increase AI/ML spending in the coming years, according to Forbes Insights research, the industry is leaving behind its stagnant reputation and diving into automation.
Introduction

A very common challenge of applying ML in manufacturing is data acquisition and data processing. While the accuracy of data plays a strong role in the performance of ML algorithms, the challenge is to go through massive data sets and label each unit with a high level of accuracy. This paper addresses the problem of applying AI/ML to defective product classification by using an Autoencoder model and a One-class classification algorithm to separate normal samples from abnormal ones. The solution not only helps to extract important features, thereby precisely detecting abnormalities, but also reduces the amount of time spent on these tasks. Experiments on the client's data set showed a reduction of approximately 35% in user workload, allowing manufacturers to achieve higher productivity and a lower burden on quality control workers.
Background

Product quality control plays an important role in maintaining and strengthening the link between manufacturers and final customers. Statistics show that 45% of companies using smart manufacturing technologies have experienced increased customer satisfaction. The visual quality impression of a product has a strong influence on the decision of whether a product is purchased or not. Traditionally, the verification process, when handled manually by human eyes, is time-consuming and takes a lot of effort. Today, automated verification systems using artificial intelligence are becoming more and more popular among smart technology manufacturers.

A common approach using artificial intelligence for the verification process is to use a supervised learning model. However, the challenge of this approach lies in collecting faulty samples for the training process. Usually, defects occur very rarely, so experts need to spend extreme labor effort to label samples at a unit level. To overcome this challenge, this paper proposes an unsupervised learning approach for defect detection, which can save much human labor effort during the training process.

This unsupervised learning model uses an Autoencoder network, which reconstructs the input image based on a sample of normal data; when it encounters abnormal data, it will not reconstruct the defect areas. This leaves the reconstructed images with strange bright spots on the CT image. By subtracting the corresponding query region, a pixel-wise anomaly score is obtained, which is then used to detect defects, so experts can save the time of going through all the normal samples. This approach, however, still has some limitations:
- In some cases, anomalies are small bright dots that are hardly recognizable even by human eyes, and there is no apparent difference between the abnormal image and the normal image.
- Patterns in the data set vary, which makes it difficult for the Autoencoder network to learn the main patterns.

To overcome the above limitations, this paper proposes a combination model in which the output of the Autoencoder network becomes the input of a One-class model that identifies objects belonging to a specific concept. This combination approach reduces the number of images that need to be inspected, saving human workload and making it easier for experts to detect defective samples.
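The reconstruction-subtraction step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the image shapes, the threshold, and the minimum-pixel count are illustrative assumptions that would in practice be tuned on normal data.

```python
import numpy as np

def pixelwise_anomaly_score(query, reconstruction):
    """Absolute per-pixel difference between a query image and its
    Autoencoder reconstruction (both arrays scaled to [0, 1])."""
    return np.abs(query.astype(np.float32) - reconstruction.astype(np.float32))

def is_defective(score_map, threshold=0.2, min_pixels=5):
    """Flag an image as defective when enough pixels exceed the threshold.
    Both parameters are illustrative, not the paper's tuned values."""
    return int((score_map > threshold).sum()) >= min_pixels

# A normal image is reconstructed almost perfectly, so its score map is flat...
normal = np.full((8, 8), 0.5, dtype=np.float32)
print(is_defective(pixelwise_anomaly_score(normal, normal)))  # False

# ...while a defect region the Autoencoder cannot reconstruct survives
# as a bright spot in the score map.
defect = normal.copy()
defect[2:5, 2:5] = 1.0
print(is_defective(pixelwise_anomaly_score(defect, normal)))  # True
```

Because the score map is pixel-wise, it also localizes the defect, which is what lets experts skip manual review of images whose maps stay flat.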
Solution

The solution combines an Autoencoder network and a One-class model during training. The idea behind this approach is similar to breaking an object into multiple pieces and then reconstructing it in a predefined way; if the reconstructed object is not the same as the original, it is defective. Fig 1 shows the overall architecture of this model in the training and testing phases, which includes:
Training Phase
This phase teaches the model to classify normal and abnormal data. The usual supervised process would require manual effort to label the data in order to improve the model. With the unsupervised approach, the U-net improves the model over time.
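As a rough sketch of the training phase, a minimal PyTorch Autoencoder can be fitted on normal images only, with no defect labels. The layer sizes and the MSE reconstruction loss here are illustrative stand-ins; the paper's actual network is a U-net trained with an SSIM loss.

```python
import torch
import torch.nn as nn

# Tiny convolutional Autoencoder standing in for the paper's U-net;
# the channel counts and image size are illustrative assumptions.
class TinyAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU())
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 2, stride=2), nn.Sigmoid())

    def forward(self, x):
        return self.decode(self.encode(x))

# Unsupervised training: only normal images are needed, so no expert
# spends labor labeling defect samples at a unit level.
model = TinyAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
normal_batch = torch.rand(4, 1, 32, 32)  # stand-in for a batch of normal images
for _ in range(5):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(normal_batch), normal_batch)
    loss.backward()
    opt.step()
```

Trained this way, the network learns to reproduce only normal appearance, which is exactly why defect regions fail to reconstruct at test time.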
Testing Phase
In this phase, the classifier models learned in the training phase, which are the output of One-class classification with the Local outlier factor algorithm, are used to discriminate between normal images and abnormal images. Over time, as the model improves through the two phases, the accuracy of the image-processing output also improves. In this paper, we run the testing phase on a real data set of 25,088 images from our client.
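The one-class decision in the testing phase can be illustrated with scikit-learn's LocalOutlierFactor in novelty mode. The 256-dimensional vectors below are synthetic stand-ins for the 1×256 encoder embeddings; the neighbor count and the separation of the synthetic clusters are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
# Stand-ins for 1x256 encoder embeddings of normal training images.
normal_feats = rng.normal(0.0, 0.1, size=(200, 256))
# Queries: five normal-like embeddings, then five far-off (defective) ones.
query_feats = np.vstack([rng.normal(0.0, 0.1, size=(5, 256)),
                         rng.normal(3.0, 0.1, size=(5, 256))])

# novelty=True fits on normal data only and scores unseen queries,
# mirroring the paper's one-class testing phase.
lof = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(normal_feats)
labels = lof.predict(query_feats)  # +1 = normal, -1 = anomaly
```

Only the images labeled -1 would be routed to human experts, which is the mechanism behind the reported workload reduction.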
[Fig 1: Overall architecture. Training: a U-net Autoencoder (encode/decode) is trained with an SSIM loss; its encoder, sharing weights with the Autoencoder and fed the same batch of data, trains a One-class model whose 1×256 embeddings are pulled inside a hypersphere boundary by a compactness loss.]
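The compactness loss shown in Fig 1 is commonly defined as the average squared distance of each embedding from the batch mean, which pulls normal embeddings into a tight hypersphere. The definition below is that common formulation, an assumption rather than the paper's exact loss, and the batch shapes are illustrative.

```python
import numpy as np

def compactness_loss(embeddings):
    """Mean squared distance of each 1x256 embedding from the batch mean.
    Minimizing it shrinks the hypersphere enclosing normal embeddings;
    this common definition is an assumption about the paper's loss."""
    center = embeddings.mean(axis=0)
    return float(np.mean(np.sum((embeddings - center) ** 2, axis=1)))

identical = np.zeros((8, 256))          # perfectly compact batch
scattered = np.eye(8, 256) * 10.0       # embeddings spread apart
print(compactness_loss(identical))      # 0.0
print(compactness_loss(scattered) > 0)  # True
```

At test time, embeddings falling outside the learned hypersphere boundary are the ones flagged as abnormal.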