
Article

Simultaneous Burr and Cut Interruption Detection during Laser Cutting with Neural Networks

Benedikt Adelmann * and Ralf Hellmann

Applied Laser and Photonics Group, Faculty of Engineering, University of Applied Sciences Aschaffenburg, Wuerzburger Straße 45, 63739 Aschaffenburg, Germany; [email protected]
* Correspondence: [email protected]; Tel.: +49-6022-81-3693

Abstract: In this contribution, we compare basic neural networks with convolutional neural networks for cut failure classification during fiber laser cutting. The experiments are performed by cutting thin electrical sheets with a 500 W single-mode fiber laser while taking coaxial camera images for the classification. The quality is grouped in the categories good cut, cuts with burr formation and cut interruptions. Indeed, our results reveal that both cut failures can be detected with one system. Independent of the neural network design and size, a minimum classification accuracy of 92.8% is achieved, which could be increased with more complex networks to 95.8%. Thus, convolutional neural networks reveal a slight performance advantage over basic neural networks, which yet is accompanied by a higher calculation time, which nevertheless is still below 2 ms. In a separate examination, cut interruptions can be detected with much higher accuracy as compared to burr formation. Overall, the results reveal the possibility to detect burr formations and cut interruptions during laser cutting simultaneously with high accuracy, as being desirable for industrial applications.

Keywords: laser cutting; quality monitoring; artificial neural network; burr formation; cut interruption; fiber laser

Citation: Adelmann, B.; Hellmann, R. Simultaneous Burr and Cut Interruption Detection during Laser Cutting with Neural Networks. Sensors 2021, 21, 5831. https://doi.org/10.3390/s21175831

Academic Editor: Bernhard Wilhelm Roth

Received: 8 June 2021; Accepted: 25 August 2021; Published: 30 August 2021

1. Introduction

Laser cutting of thin metal sheets using fiber or disk lasers is now a customary process in the metal industry. The key advantages of laser cutting are high productivity and flexibility, good edge quality and the option for easy process automation. Especially for highly automated unmanned machines, seamlessly combined in line with bending, separation or other machines, a permanent high cut quality is essential to avoid material waste, downtime or damage to subsequent machine steps in mechanized process chains. As a consequence, besides optimizing the cutting machine in order to reduce the influence of disturbance variables, cut quality monitoring is also of utmost interest.

The most common and disruptive quality defects are cut interruptions and burr formation [1]. To obtain high-quality cuts, process parameters such as laser power, feed rate, gas pressure, working distance of the nozzle and focus position, respectively, are to be set appropriately. Imprecise process parameters and typical disturbance values like thermal lenses, unclean optics, damaged gas nozzles, gas pressure fluctuations and variations of material properties may lead to poor cut quality and, thus, nonconforming products. To ensure a high quality, an online quality monitoring system, which can detect multiple defects, would be the best choice in order to respond quickly and reduce downtime, material waste or cost-extensive rework. Until now, most reviewed sensor systems for monitoring laser cutting focus only on one single fault.

For detecting burr formation during laser cutting, different approaches using cameras, photodiodes or acoustic emission were investigated. In [2,3], burr formation, roughness and striation angle during laser cutting with a 6 kW CO2 laser are determined by using a NIR camera sampling at 40 Hz.


By using two cameras in [4], laser cutting with a CO2 laser is monitored by observing the spark trajectories underneath the sheet and the melt bath geometries and correlating these to burr formation or overburning defects. A novel approach is used in [5], employing a convolutional neural network to calculate burr formation from camera images with a high accuracy of 92%. By evaluating the thermal radiation of the process zone with photodiodes [6], the burr height during fiber laser cutting can be measured from the standard deviation of a filtered photodiode signal. Results using photodiode-based sensors integrated in the cutting head [7] showed that the mean photodiode current increases with lower cut qualities, while similar experiments revealed increasing mean photodiode currents at lower cut surface roughness [8]. An acoustic approach was investigated by monitoring the acoustic emission during laser cutting and deducing burr formation by evaluating the acoustic bursts [9].

Also for cut interruption detection, most approaches are based on photodiode signals or camera images. Photodiode-based methods for cut interruption detection are signal threshold-based [10], done by the comparison of different photodiodes [11] or based on cross-correlations [12]. However, all those methods have the disadvantage of requiring thresholds that vary with the sheet thickness or laser parameters. In addition, an adaptation to other materials or sheet thicknesses requires a large engineering effort to define respective threshold values by extensive investigations. To avoid this problem, [13] uses a convolutional neural network to calculate cut interruptions from camera images during fiber laser cutting of different sheet thicknesses with an accuracy of 99.9%. Another approach is performed by using a regression model based on polynomial logistics [14] to calculate the interruptions from laser machine parameters only.

This literature review reveals that for both burr formation monitoring and cut interruption, individual detection schemes have previously been reported, but a combined and simultaneous detection of both failure patterns has not been reported so far. In addition, many of the previous studies applied CO2 lasers, which are nowadays often replaced by modern fiber or disk lasers, for which, in turn, fewer reports are available. To detect both failures with the same system, we chose the evaluation of camera images with neural networks, as they are able to achieve a high accuracy in detecting both cut failures [5,13]. The use of neural networks, especially convolutional neural networks (CNN), has been demonstrated for various image classification purposes, such as face recognition and object detection [15,16], in medicine for cancer detection [17] and electroencephalogram (EEG) evaluations [18] or in geology for earthquake detection [19]. For failure analyses in technical processes, neural networks have also been successfully used for, e.g., concrete crack detection [20], road crack detection [21] or wafer defect pattern classification [22]. In addition, detecting different failure types with the same system has been successfully proven with neural networks, such as detecting various wood veneer surface defects [23] or different welding defects [24] during laser welding.

The objective of this publication is to detect both burr formation and cut interruptions during single-mode laser cutting of electrical sheets from camera images with neural networks. The advantages of our system are, firstly, easy adaptation to industrial cutting heads, which often already have a camera interface.
Secondly, images are taken coaxially to the laser beam and are therefore independent of the laser cut direction. Thirdly, due to the use of a learning system, the engineering effort is low when the system has to be adapted to other materials or sheet thicknesses. Two different neural network types are used, namely a basic neural network and a convolutional neural network. The basic neural network is faster and can detect bright or dark zones but is less able to extract abstractions of 2D features and needs a lot of parameters when the networks get more complex. On the other hand, convolutional neural networks are much better in learning and extracting abstractions of 2D features and usually need fewer parameters. However, they require a higher calculation effort due to the many multiplications in the convolution layers [25,26].

The cutting of electrical sheets is chosen because it is an established process in the production and prototyping of electric motors and transformers [27–30], i.e., it is a relevant and contributing process step to foster e-mobility.

In order to reduce the electrical losses caused by eddy currents, the rotor is assembled from a stack of thin electrical sheets with electrical isolation layers in between the sheets. The sheet thickness typically varies between 0.35 mm and 0.5 mm, with the eddy currents being lower for thinner sheets. As a result, for an electric motor, a large number of sheets with high quality requirements is necessary. Especially burr formations result in gaps between sheets, or the burr can pierce the electrical isolation layer and connect the sheets electrically, both of which reduce the performance of motors drastically. Therefore, quality monitoring during laser cutting is of great interest for industrial applications.

2. Experimental

2.1. Laser System and Cutting Setup

In this study, a continuous wave 500 W single-mode fiber laser (IPG Photonics, Burbach, Germany) is used to perform the experiments. The laser system is equipped with linear stages (X, Y) for positioning the workpiece (Aerotech, Pittsburgh, PA, USA), and a fine cutting head (Precitec, Gaggenau, Germany) is attached to a third linear drive (Z). The assist gas, nitrogen with a purity greater than 99.999%, flows coaxially to the laser beam. The gas nozzle has a diameter of 0.8 mm and its distance to the workpiece is positioned by a capacitive closed loop control of the z-linear drive. The emitting wavelength of the laser is specified to be 1070 nm in conjunction with a beam propagation factor of M² < 1.1. The raw beam diameter of 7.25 mm is focused by a lens with a focal length of 50 mm. The corresponding Rayleigh length is calculated to 70 µm and the focus diameter to 10 µm, respectively.

The design of the cutting head with the high-speed camera and a photo of the laser system are illustrated in Figure 1. The dashed lines depict the primary laser radiation from the fiber laser, which is collimated by a collimator and reflected by a dichroic mirror downwards to the processing zone. There the laser radiation is focused by the processing lens through the protective glass onto the work piece, which is placed on the XY stages. The process radiation from the sheet radiates omnidirectionally (dash-dotted line), thus partly through the nozzle and protective glass, and is collimated by the processing lens upwards. The process radiation passes the dichroic mirror and is focused by a lens onto the high-speed camera. The focus of the camera is set to the bottom side of the sheet in order to have a sharp view of possible burr formations.

Figure 1. Optical setup of the cutting head (left) and image of the system (right).



2.2. Laser Cutting

The laser cuts are performed in electrical sheets of the type M270 (according to EN 10106 this denotes a loss of 2.7 W/kg during reversal of magnetism at 50 Hz and 1.5 T) with a sheet thickness of 0.35 mm. This sheet thickness is chosen because it fits well to the laser focus properties, e.g., the Rayleigh length, and it is one of the most often used sheet thicknesses for electrical motors and transformers, because it provides a good compromise between low eddy currents and high productivity. Stacks of thicker sheets are faster to produce because fewer sheets are required per stack, but with increasing sheet thickness the unwanted eddy currents also increase. Thinner sheets require a higher production effort per stack and are more difficult to cut because they are very flexible and warp under the gas pressure and thermal influence. In these experiments only one sheet thickness is used, but please note that in previous publications with similar systems an adaptation of the results to other sheet thicknesses was possible with only minor additional expenses [5,13].

As ad-hoc pre-experiments reveal, the parameter combination for a good quality cut is a laser power of 500 W, a feed rate of 400 mm/s and a laser focus position on the bottom side of the metal sheet. The gas nozzle has a diameter of 0.8 mm and is placed 0.5 mm above the sheet surface, and the gas pressure is 7 bar. For the experimental design, the parameters are varied to intentionally enforce cut failures. Burr formations are caused by less gas flow into the cut kerf due to a higher nozzle to sheet distance, lower gas pressure, an excessive power to feed rate ratio or damaged nozzles. Cut interruptions are enforced by too high feed rates or too low laser power. In the experimental design, 39 cuts with different laser parameters are performed for training the neural network and 22 cuts are performed for testing, with the cuts being evenly distributed over the three cut categories (good cut, cuts with burr formation and cut interruptions). A table of all cuts with laser machine parameters, category and use can be found in Appendix A. The cuts are designed as a straight line including acceleration and deceleration paths of the linear stages.

Exemplifying images of the sheets from all three cut categories, taken by optical microscope after the cutting process, are shown in Figure 2. Firstly, for a good quality cut, both top and bottom side of the cut kerf are characterized by clear edges without damages. Secondly, for a cut with burr, the top side is similar to the good quality cut, however on the bottom side drops of burr formation are clearly visible. Thirdly, the images of the cut interruption reveal a molten line on the sheet top side and only a slightly discolored stripe on the bottom side, with both sides of the sheet not being separated.


(Figure 2 panels: good quality cut, cut with burr and cut interruption, each shown from the top and bottom side.)

Figure 2. Images of the top and bottom side of laser cuts with and without cut errors taken with an optical microscope after laser cutting.

2.3. Camera and Image Acquisition

For image acquisition during laser cutting, we used a high-speed camera (Fastcam AX50, Photron, Tokyo, Japan) with a maximum frame rate of 170,000 frames per second. The maximum resolution is 1024 × 1024 pixels, with a square pixel size of 20 × 20 µm² in combination with a Bayer CFA color matrix. For process image acquisition, videos of the laser cutting process are grabbed with a frame rate of 10 kilo frames per second with an exposure time of 2 µs and a resolution of 128 × 64 pixels. Even at this high frame rate, no oversampling occurs and consecutive images are not similar, because the relevant underlying melt flow dynamics are characterized by high melt flow velocities in the range of 10 m/s [31] and vary therefore at estimated frequencies between 100 kHz and 300 kHz [32]. Please note, due to the lack of external illumination in the cutting head, the brightness in the images is caused by the thermal radiation of the process zone.

Two exemplifying images of each cut category are shown in Figure 3, with the cut direction always upwards. The orientation of the images is always the same because the straight lines are cut in the same direction. For complex cuts, images with the same orientation can be transformed from variously oriented images by rotation based on the movement direction of the drives. In these images, brightness is caused by the thermal radiation of the hot melt. Good cuts are characterized by a bright circle at the position of the laser focus, and below this, two tapered stripes indicating the flowing melt at the side walls of the cut kerf, because in the middle the melt bath is blown out first. The cuts with burr are similar to the good quality cuts, but the tapered stripes are formed differently. The cut interruptions are very different to the other categories and are characterized by larger bright areas and a more elliptical shape with no tapered stripes.


(Figure 3 panels: good cut, cut with burr, cut interruption.)

Figure 3. Examples of camera images of the three cut categories taken during laser cutting with the high-speed camera.

From the 39 laser cuts, the experimental design delivers the same number of training videos with overall 52 thousand training images, while from the 22 testing cuts 34 thousand test images are provided. It is worth mentioning that a training set of several ten thousand images is typical for training neural networks [33]. For both training and testing, the images are almost evenly distributed over the three categories, with cut interruptions being slightly underrepresented. The reason for this underrepresentation is that cut interruptions only occur at high feed rates, i.e., images from acceleration and deceleration paths can be used only partially and, in turn, fewer images per video can be captured.

2.4. Computer Hardware and Neural Network Design

For learning and evaluating the neural network, a computer with an Intel Core i7-8700 processor with a 3.2 GHz clock rate in combination with 16 GB DDR4 RAM was used. All calculations are performed with the CPU rather than the GPU to show that the machine learning steps can also be run on standard computers, which are usually integrated with laser cutting machines. The software used was TensorFlow version 2.0.0 in combination with Keras version 2.2.4 (software available from: https://tensorflow.org (accessed on 24 March 2021)).

In most publications about image classification with neural networks, the images have major differences. In contrast, in the images captured in our experiments, the object to analyze always has the same size, orientation and illumination conditions, which should simplify the classification when compared to classifying common, moving items like vehicles or animals [34,35]. Furthermore, our images have a rectangular shape with 128 × 64 pixels, while most classification algorithms are optimized for square image sizes, mostly with a resolution of 224 × 224 pixels, like MobileNet, SqueezeNet or AlexNet [36,37]. Because an enlargement of the image size slows the system drastically, two self-designed and completely different neural networks are used, with many elements being adapted from other, often used neural networks. The first network, as shown in Figure 4, is a basic neural network without convolution and only consists of image flattening followed by two fully connected layers with N nodes and ReLU activation. To classify the three different cut categories, a fully connected layer with 3 nodes and softmax activation completes the network. The second network is a convolutional neural network with four convolution blocks followed by the same three fully connected layers as in the basic network. Each block consists of a convolution layer with a kernel size of 3 × 3 and M filters, where the output of the convolution is added to the input of the block. Such bypasses are most common in, e.g., MobileNet [36]. To reduce the number of parameters, a max pooling layer with a common pool size of 2 × 2 is used [26]. In contrast to neural networks often used in the literature, we use a constant instead of an increasing filter number for subsequent convolution layers, and we use normal convolutions rather than separable or pointwise convolutions. Because every block halves the image size in 2 dimensions, after 4 blocks the image size is 8 × 4 × M. The fully connected layers after the flattening have the same number of nodes as the number of parameters delivered by the flattened layer. The used model optimizer is Adam, which, according to [38], together with SGD (stochastic gradient descent) provides superior optimization results. Furthermore, we use the loss function "categorical crossentropy" to enable categorical outputs (one-hot encoding), and the metric "accuracy".

Figure 4. Design of the two neural networks.
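To make the two network designs concrete, the following Keras sketch reproduces the described layer sequence. It is a minimal sketch under assumptions not stated explicitly above: three-channel 128 × 64 input frames, "same" padding so that only the pooling halves the spatial size, ReLU activations inside the convolution blocks, and an initial channel-matching convolution so that the additive bypass has matching tensor shapes. N and M remain the free hyperparameters discussed in the text.

```python
from tensorflow.keras import layers, models

IMG_SHAPE = (128, 64, 3)  # assumption: three-channel frames from the Bayer color camera

def basic_network(n_nodes):
    # Basic network: flatten, two fully connected ReLU layers, 3-class softmax head
    return models.Sequential([
        layers.Flatten(input_shape=IMG_SHAPE),
        layers.Dense(n_nodes, activation="relu"),
        layers.Dense(n_nodes, activation="relu"),
        layers.Dense(3, activation="softmax"),  # good cut / burr / interruption
    ])

def conv_network(m_filters):
    # CNN: four convolution blocks with additive bypass and 2x2 max pooling,
    # followed by the same fully connected head as the basic network
    inputs = layers.Input(shape=IMG_SHAPE)
    x = layers.Conv2D(m_filters, 3, padding="same", activation="relu")(inputs)  # assumed channel-matching conv
    for _ in range(4):
        y = layers.Conv2D(m_filters, 3, padding="same", activation="relu")(x)
        x = layers.Add()([x, y])                  # bypass: convolution output added to block input
        x = layers.MaxPooling2D(pool_size=2)(x)   # halves both spatial dimensions
    x = layers.Flatten()(x)                       # 8 x 4 x M features after four blocks
    n = 8 * 4 * m_filters                         # dense width equals the flattened feature count
    x = layers.Dense(n, activation="relu")(x)
    x = layers.Dense(n, activation="relu")(x)
    outputs = layers.Dense(3, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = conv_network(16)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",    # one-hot encoded class labels
              metrics=["accuracy"])
```

With this layout, four pooling steps reduce the 128 × 64 frames to 8 × 4, matching the feature size stated in the text.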

2.5. Methodology

The methodology of our experiments is shown in the workflow diagram in Figure 5. In a first step, the laser cuts are performed and, during cutting, videos of the process zone are taken with the high-speed camera; some of these images have been shown in Figure 3. After cutting, the cut kerfs are analyzed with an optical microscope and categorized manually as to whether a good cut, burr formation or a cut interruption occurred (examples of these images are shown in Figure 2). Based on this classification, the videos taken during laser cutting are labeled with the corresponding class. In case the cut quality changes within one cut, the video is divided, so the quality is constant within a video. Then the videos are separated into training videos and test videos, so that the images for testing are not taken from videos used for training. From the training videos, the single frames are extracted and with these images the neural network is trained. Furthermore, the single frames are extracted from the test videos, and the resulting images are used to test the trained neural network.

(Workflow: laser cutting & video acquisition, sheet analysis; training videos → training images → training the neural network; test videos → test images → testing the neural network.)

Figure 5. Workflow diagram.
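A minimal sketch of this data preparation is shown below, assuming one video file per cut with a known class label; OpenCV is used here for frame extraction, and all file names and labels are illustrative placeholders.

```python
import cv2

def extract_frames(video_path):
    # Return all frames of one cutting video as a list of images
    frames, cap = [], cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    while ok:
        frames.append(frame)
        ok, frame = cap.read()
    cap.release()
    return frames

# Split by video, not by frame, so test images never originate from training videos
train_videos = {"cut_01.avi": "good", "cut_07.avi": "burr"}   # illustrative file names / labels
test_videos = {"cut_08.avi": "good"}

train_data = [(img, label) for path, label in train_videos.items()
              for img in extract_frames(path)]
test_data = [(img, label) for path, label in test_videos.items()
             for img in extract_frames(path)]
```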


3. Results

3.1. Training Behaviour

The different neural networks are trained on the training dataset and the performance is calculated on the test dataset. Exemplarily, the training behavior of the convolutional neural network with 16 filters in each convolution is shown in Figure 6. Apparently, the training accuracy rises continuously with the training epochs, reaching 99% after 10 epochs and 99.5% after 20 epochs, respectively. On the other hand, the test accuracy reaches 94% after three epochs and fluctuates around this level with further training, which is a typical behavior for neural network training [39]. Even further training, above 20 epochs, results only in a fluctuation of the accuracy rather than a continuous increase. To reduce the deviation of the test results for comparisons between different networks, the mean of the test results between 10 and 20 epochs is used.


Figure 6. Training accuracy and test accuracy of a convolutional neural network with 16 filters.
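To illustrate how this comparison value is obtained, the sketch below reuses the compiled model from the sketch in Section 2.4 and passes the test set as validation data, so that a test accuracy is logged after every epoch and averaged over epochs 10 to 20. The stand-in random data and all variable names are illustrative assumptions; with standalone Keras the history key is "val_acc" instead of "val_accuracy".

```python
import numpy as np
import tensorflow as tf

# Illustrative stand-in data; in the experiments these are the extracted video frames
train_images = np.random.rand(100, 128, 64, 3).astype("float32")
train_labels = tf.keras.utils.to_categorical(np.random.randint(0, 3, 100), 3)
test_images = np.random.rand(50, 128, 64, 3).astype("float32")
test_labels = tf.keras.utils.to_categorical(np.random.randint(0, 3, 50), 3)

# The test set is passed as validation data, so it is evaluated after every epoch
history = model.fit(train_images, train_labels, epochs=20,
                    validation_data=(test_images, test_labels), verbose=0)

val_acc = history.history["val_accuracy"]            # "val_acc" with standalone Keras
mean_test_acc = sum(val_acc[9:]) / len(val_acc[9:])  # mean over epochs 10-20
print(f"mean test accuracy (epochs 10-20): {mean_test_acc:.3f}")
```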

3.2. Basic Neural Network

To determine the performance of the basic neural networks, those with node numbers N between 5 and 1000 are trained on the training dataset and tested on the test dataset. The mean test accuracy between 10 and 20 training epochs and the required calculation time per image are shown in Figure 7. It is obvious that the accuracy for a very small network with only five nodes is quite high, being 92.8%, and the calculation time of 0.1 ms per image is very fast. With an increasing number of nodes, the accuracy increases to a maximum of 95.2% at 1000 nodes, which is accompanied by a higher calculation time of 0.32 ms. Parallel to the calculation time, the number of trainable parameters also increases with the number of nodes, starting from 122 thousand parameters for five nodes and reaching 25 million parameters at 1000 nodes. A further increase of the parameters is not considered to be useful, because the training dataset consists of 420 million pixels (number of images × pixels per image), so the neural network would tend to overfit the training dataset rather than developing generalized features. Generally, with the basic neural network, accuracies of 94% (mean) are achievable.
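As a rough plausibility check of the reported parameter counts, the small calculation below assumes three-channel 128 × 64 input images, two hidden layers with N nodes each and a three-node softmax head; under these assumptions the totals come out close to the reported 122 thousand and 25 million parameters.

```python
# Rough parameter count of the basic network (assumption: three-channel
# 128 x 64 input, two hidden layers with N nodes, three-node softmax output)
def basic_param_count(n):
    flat = 128 * 64 * 3            # flattened input size
    hidden1 = flat * n + n         # weights + biases, first dense layer
    hidden2 = n * n + n            # second dense layer
    out = n * 3 + 3                # softmax head
    return hidden1 + hidden2 + out

print(basic_param_count(5))        # ~1.2e5, close to the reported 122 thousand
print(basic_param_count(1000))     # ~2.6e7, close to the reported 25 million
```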

Sensors 2021, 21, x FOR PEER REVIEW 9 of 15 Sensors 2021, 21, x FOR PEER REVIEW 9 of 15 Sensors 2021, 21, 5831 9 of 16


Figure 7. Accuracy of the basic neural network as a function of nodes per fully connected layer.

3.3. Convolutional Neural Network

Under the same conditions as the basic neural network, the convolutional neural network is also trained and tested. The results of the accuracy and calculation time for filter numbers between 4 and 64 are depicted in Figure 8. The accuracy of the neural network is quite high for all filter numbers and fluctuates between 94.6% and 95.8% with no clear trend. In addition, the accuracy also varies for the same network when it is calculated several times. However, the calculation time increases clearly with the number of filters, from 0.36 ms per image to 1.77 ms. The number of trainable parameters starts with 34 thousand for four filters and increases to 8.4 million for 64 filters (details on how to calculate the number of parameters are described in [25]). On average, the convolutional neural network is able to classify about 95% of the images correctly.


Figure 8. Test accuracy for the convolutional neural network as a function of the number of filters.


3.4. Comparison between Cut Failures

Since literature is available for both burr detection and cut interruptions during laser cutting, which vary strongly in accuracy, the performance of our neural networks in detecting one cut failure is determined. Therefore, the accuracy in classifying good cuts and cuts with burr as well as good cuts and cut interruptions is calculated separately. For this investigation, the convolutional neural network with 16 filters is chosen, because it provides high accuracy and a comparably moderate calculation time. The results of the separated classification are shown in Figure 9. It is obvious that the detection of cut interruptions is very reliable, with the accuracy being 99.5%, as compared to 93.1% when detecting burr formation. The reason for this can also be seen in Figure 3, where good cuts are much more similar to cuts with burr, while cut interruptions look very different to both of the other failure classes. Both values individually agree with the literature values, which are 99.9% for the cut interruptions [13] and 92% for the burr detection [5], yet for burr detection in the literature a more complex burr definition is chosen. This shows that cut interruptions are much easier to detect from camera images compared to burr formations.


Figure 9. Comparison of the test accuracy between interruptions and burr formations.
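One plausible way to obtain such separate values is to restrict the evaluation of the trained three-class network to the two classes in question, as sketched below; y_true and y_pred are assumed to be per-image label arrays of the test set, and the exact evaluation procedure used here may differ.

```python
import numpy as np

def pairwise_accuracy(y_true, y_pred, classes):
    # Accuracy restricted to test images whose true label belongs to `classes`
    mask = np.isin(y_true, classes)
    return float(np.mean(y_true[mask] == y_pred[mask]))

# y_true / y_pred: per-image labels of the test set (illustrative names)
# acc_interruption = pairwise_accuracy(y_true, y_pred, ["cut", "interruption"])
# acc_burr         = pairwise_accuracy(y_true, y_pred, ["cut", "burr"])
```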

3.5. Error Analysis

For the error analysis, single misclassified images and the distribution of misclassifications are analyzed. For the temporal distribution, a video of a cut with different cut qualities is produced. The measured quality obtained by the optical microscope and the prediction of the convolutional neural network with 16 filters is shown in Figure 10. Misclassifications are indicated by red dots that are not placed on the blue line, and it can clearly be seen which misclassifications occur more often than others. The most frequent misclassifications are cuts predicted as burr. Interruptions are rarely misclassified and other images are seldom misclassified as interruptions, which accompanies the results in Section 3.4. The distribution of the misclassifications reveals no concentration on a specific sector, but minor accumulations of several misclassifications are observed. In addition, some areas without any misclassification or only single misclassifications can be found. These results reveal that misclassifications do not occur all at once or at a special event but are widely distributed.


Figure 10. Measured image class and prediction by the neural network.

To analyze the misclassified images, two exemplified images from a good cut classified as cut with burr are shown in Figure 11. In contrast to the images in Figure 3, where the bright area is followed by two tapered stripes, in Figure 11 these stripes are hardly observed. However, these following stripes are important for the classification, because in this area the burr is generated. Therefore, in the case of missing stripes, the classification between cuts with and without burr is difficult and thus characterized by many misclassifications.

Figure 11. Two examples of cuts misclassified as burr.

4. Discussion

With the classification into three different categories during laser cutting, good cuts can be distinguished from cuts with burr and cut interruptions. The convolutional neural network has, depending on the number of filters, a better classification accuracy by about 1% when compared to the basic networks. The maximum accuracy for the basic neural networks (1000 nodes) is also lower, being 95.2% as compared to a 95.8% accuracy of the convolutional neural network with 16 filters. Nevertheless, the difference between both neural network types is small, which can be explained by the objects in the images always having the same size, orientation and brightness, which is not usually the case for many other classification tasks [34,35]. As a consequence, the basic neural network can classify the images by bright or dark zones and does not necessarily require learning and extracting abstractions of 2D features, which is the main advantage of convolutional neural networks [25,26].

For the required accuracy, the size of the cut failure has to be considered. Because the accuracy is below 100%, a post-processing algorithm is necessary which should report an error only when a certain amount of failures occurs in a sequence of images. To detect geometrically long failures, which can occur, e.g., due to unclean optics, our classification system is adequate. Very short failures, like single burr drops when cutting an edge, are probably not detectable with our system. It is remarkable for the results with both neural networks, however, that at least 92.8% accuracy (cf. Figure 7) can be achieved with any network configuration, independent of network type, number of nodes or filters. This means that about 93% of the images are easy to classify because they differ strongly between the categories. Furthermore, about 2% of the images can be classified by more complex neural networks (cf. Sections 3.2 and 3.3). About 5% of the images, mostly between the categories good cuts and cuts with burr formation, are very difficult to classify because the images are quite similar (cf. Figure 3). For an industrial application, it has to be further considered whether the intentionally enforced cut failures are representative of typical industrial cut failures, e.g., as a result of unclean optics, which are not reproducible in scientific studies.

The main advantage of the basic neural network is the much lower computation time between 0.1 ms and 0.32 ms, while the convolutional neural network requires 0.36 ms to 1.7 ms, respectively. For typically available industrial cameras having maximum frame rates in the range of 1000 Hz, a calculation time for the classification of about 1 ms is sufficient, which is fulfilled by all basic and most of our convolutional neural networks. A similar frame rate was also used by [5] when detecting burr formations during laser cutting. With maximum cutting speeds of modern laser machines in the range of 1000 mm/s, still a local resolution of 1 mm is achieved, which can clearly be considered adequate for industrial use.

Following this fundamental and comparative analysis, future investigations have to address field trials of the proposed sensor system and classification scheme in industrial cutting processes. Within such industrial environments, additional error sources may appear and further reduce the cut quality, such as damaged gas nozzles or partially unclean optics, which in turn are difficult to reproduce under laboratory conditions. The images from these error sources can be added to the training data and improve the detection rate of the classification system. To improve the detection rate, it is also possible to classify not a single image but a series of 3 to 10 subsequent images, which reduces the influence of a single misleading image.
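A minimal sketch of such a sequence-based post-processing step is given below: a majority vote over a sliding window of consecutive frame classifications is one possible realization, with the window length chosen as an assumption from the 3 to 10 frames suggested above.

```python
from collections import Counter, deque

def windowed_decision(frame_classes, window=5):
    # Majority vote over the last `window` frame classifications; the window size
    # is an assumed value within the 3-10 frames discussed in the text
    recent = deque(maxlen=window)
    for c in frame_classes:                       # c in {"cut", "burr", "interruption"}
        recent.append(c)
        yield Counter(recent).most_common(1)[0][0]

# A single spurious "burr" frame inside a good cut is suppressed:
print(list(windowed_decision(["cut", "cut", "burr", "cut", "cut", "cut"])))
```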

5. Conclusions

Overall, with our neural network approach, two cut failures during laser cutting can be detected simultaneously by evaluating camera images with artificial neural networks. With different neural network designs, up to 95.8% classification accuracy can be achieved. Generally, convolutional neural networks have only minor classification advantages of about 1% over basic neural networks, while the basic neural networks are considerably faster in calculation. The detection accuracy for cut interruptions is remarkably higher than for burr formation, because the images of cut interruptions differ more strongly from the good cuts than the images with burr formation do. In general, the detection rate is high enough to advance industrial applications.

Author Contributions: Conceptualization, B.A.; methodology, B.A.; software, B.A.; validation, B.A.; investigation, B.A.; writing—original draft preparation, B.A. and R.H.; project administration, R.H. All authors have read and agreed to the published version of the manuscript.

Funding: This project was funded by the Bavarian Research Foundation.

Data Availability Statement: Not applicable.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

Table of all performed laser cuts with machine parameters, cut category and use for training and test.

Nr | Laser Power (W) | Feed Rate (mm/s) | Nozzle Distance (mm) | Focus Position (mm) | Category | Use
1 | 500 | 600 | 0.5 | −1.25 | Cut | Training
2 | 300 | 400 | 0.5 | −1.25 | Cut | Training
3 | 500 | 500 | 0.5 | −1.25 | Cut | Training
4 | 500 | 300 | 0.5 | −1.25 | Cut | Training
5 | 300 | 600 | 0.5 | −1.25 | Interruption | Test
6 | 200 | 500 | 1.0 | −1.75 | Interruption | Training
7 | 500 | 500 | 0.8 | −1.55 | Burr | Training
8 | 500 | 600 | 0.5 | −1.25 | Cut | Test
9 | 250 | 500 | 0.5 | −1.25 | Interruption | Test
10 | 500 | 400 | 0.5 | −1.25 | Cut | Test
11 | 500 | 600 | 0.5 | −1.25 | Interruption | Training
12 | 500 | 500 | 1.0 | −1.75 | Burr | Training
13 | 500 | 500 | 0.5 | −1.25 | Cut | Test
14 | 500 | 300 | 0.5 | −1.25 | Cut | Training
15 | 500 | 200 | 0.5 | −1.25 | Cut | Test
16 | 500 | 500 | 0.5 | −1.25 | Cut | Training
17 | 500 | 500 | 0.5 | −1.25 | Interruption | Test
18 | 400 | 500 | 0.9 | −1.65 | Burr | Training
19 | 500 | 500 | 0.8 | −1.55 | Burr | Training
20 | 200 | 500 | 1.0 | −1.75 | Interruption | Training
21 | 500 | 300 | 0.5 | −1.45 | Burr | Training
22 | 150 | 500 | 0.5 | −1.25 | Interruption | Test
23 | 500 | 400 | 0.5 | −1.25 | Cut | Training
24 | 500 | 500 | 0.5 | −1.25 | Cut | Test
25 | 500 | 400 | 0.5 | −1.25 |  | Training
26 | 400 | 500 | 0.8 | −1.55 | Burr | Training
27 | 500 | 500 | 0.8 | −1.55 | Burr | Test
28 | 150 | 500 | 0.5 | −1.25 | Interruption | Training
29 | 200 | 500 | 1.0 | −1.75 | Interruption | Test
30 | 400 | 500 | 0.9 | −1.65 | Burr | Training
31 | 300 | 600 | 0.5 | −1.25 | Interruption | Training
32 | 500 | 500 | 1.0 | −1.75 | Burr | Training
33 | 500 | 300 | 0.5 | −1.25 | Cut | Test
34 | 400 | 500 | 1.0 | −1.75 | Burr | Test
35 | 500 | 500 | 0.5 | −1.25 | Interruption | Training
36 | 500 | 400 | 0.5 | −1.25 | Cut | Training
37 | 150 | 500 | 0.5 | −1.25 | Interruption | Training
38 | 400 | 500 | 0.5 | −1.45 | Burr | Training
39 | 500 | 600 | 0.5 | −1.25 | Interruption | Training
40 | 400 | 400 | 0.5 | −1.25 | Cut | Training
41 | 500 | 600 | 0.5 | −1.25 | Cut | Training
42 | 500 | 600 | 0.5 | −1.25 | Interruption | Test
43 | 400 | 500 | 0.8 | −1.55 | Burr | Test
44 | 400 | 500 | 0.5 | −1.45 | Burr | Test
45 | 400 | 400 | 0.5 | −1.25 | Cut | Test
46 | 500 | 500 | 0.5 | −1.25 | Cut | Training
47 | 500 | 200 | 0.5 | −1.25 | Cut | Training
48 | 300 | 400 | 0.5 | −1.25 | Interruption | Training
49 | 400 | 500 | 0.5 | −1.45 | Burr | Training
50 | 500 | 400 | 0.5 | −1.25 | Cut | Test
51 | 500 | 500 | 0.8 | −1.55 | Burr | Training
52 | 400 | 500 | 0.9 | −1.65 | Burr | Training
53 | 400 | 500 | 0.9 | −1.65 | Burr | Test
54 | 500 | 300 | 0.5 | −1.45 | Burr | Training
55 | 300 | 400 | 0.5 | −1.25 | Cut | Training
56 | 500 | 300 | 0.5 | −1.45 | Burr | Test
57 | 250 | 500 | 0.5 | −1.25 | Interruption | Training
58 | 300 | 400 | 0.5 | −1.25 | Cut | Training
59 | 300 | 400 | 0.5 | −1.25 | Interruption | Training
60 | 300 | 600 | 0.5 | −1.25 | Interruption | Training
61 | 300 | 400 | 0.5 | −1.25 | Interruption | Test
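As a brief illustration of how this appendix data might be processed, the short Python sketch below tallies the category and training/test assignment of the cuts listed above. The file name appendix_a_cuts.csv and its column headers are assumptions, corresponding to a hypothetical CSV export of the table.

```python
# Hypothetical helper (not part of the published study): count how many cuts of
# each category are assigned to training and test, assuming the appendix table
# has been exported to a CSV file with columns "Category" and "Use".
import csv
from collections import Counter

counts = Counter()
with open("appendix_a_cuts.csv", newline="") as f:
    for row in csv.DictReader(f):
        counts[(row["Category"], row["Use"])] += 1

for (category, use), n in sorted(counts.items()):
    print(f"{category:<12} {use:<8} {n}")
```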

References

1. Kratky, A.; Schuöcker, D.; Liedl, G. Processing with kW fibre lasers: Advantages and limits. In Proceedings of the XVII International Symposium on Gas Flow, Chemical Lasers, and High-Power Lasers, Lisboa, Portugal, 15–19 September 2008; p. 71311X.
2. Sichani, E.F.; de Keuster, J.; Kruth, J.; Duflou, J. Real-time monitoring, control and optimization of CO2 laser cutting of mild steel plates. In Proceedings of the 37th International MATADOR Conference, Manchester, UK, 25–27 July 2012; pp. 177–181.
3. Sichani, E.F.; de Keuster, J.; Kruth, J.-P.; Duflou, J.R. Monitoring and adaptive control of CO2 laser flame cutting. Phys. Procedia 2010, 5, 483–492. [CrossRef]
4. Wen, P.; Zhang, Y.; Chen, W. Quality detection and control during laser cutting progress with coaxial visual monitoring. J. Laser Appl. 2012, 24, 032006. [CrossRef]
5. Franceschetti, L.; Pacher, M.; Tanelli, M.; Strada, S.C.; Previtali, B.; Savaresi, S.M. Dross attachment estimation in the laser-cutting process via Convolutional Neural Networks (CNN). In Proceedings of the 2020 28th Mediterranean Conference on Control and Automation (MED), Saint-Raphaël, France, 16–18 September 2020; pp. 850–855.
6. Schleier, M.; Adelmann, B.; Neumeier, B.; Hellmann, R. Burr formation detector for fiber laser cutting based on a photodiode sensor system. Opt. Laser Technol. 2017, 96, 13–17. [CrossRef]

7. Garcia, S.M.; Ramos, J.; Arrizubieta, J.I.; Figueras, J. Analysis of Photodiode Monitoring in Laser Cutting. Appl. Sci. 2020, 10, 6556. [CrossRef]
8. Levichev, N.; Rodrigues, G.C.; Duflou, J.R. Real-time monitoring of fiber laser cutting of thick plates by means of photodiodes. Procedia CIRP 2020, 94, 499–504. [CrossRef]
9. Tomaz, K.; Janz, G. Use of AE monitoring in laser cutting and resistance spot welding. In Proceedings of the EWGAE 2010, Vienna, Austria, 8–10 September 2010; pp. 1–7.
10. Adelmann, B.; Schleier, M.; Neumeier, B.; Hellmann, R. Photodiode-based cutting interruption sensor for near-infrared lasers. Appl. Opt. 2016, 55, 1772–1778. [CrossRef]
11. Adelmann, B.; Schleier, M.; Neumeier, B.; Wilmann, E.; Hellmann, R. Optical Cutting Interruption Sensor for Fiber Lasers. Appl. Sci. 2015, 5, 544–554. [CrossRef]
12. Schleier, M.; Adelmann, B.; Esen, C.; Hellmann, R. Cross-Correlation-Based Algorithm for Monitoring Laser Cutting with High-Power Fiber Lasers. IEEE Sens. J. 2017, 18, 1585–1590. [CrossRef]
13. Adelmann, B.; Schleier, M.; Hellmann, R. Laser Cut Interruption Detection from Small Images by Using Convolutional Neural Network. Sensors 2021, 21, 655. [CrossRef]
14. Tatzel, L.; León, F.P. Prediction of Cutting Interruptions for Laser Cutting Using Logistic Regression. In Proceedings of the Lasers in Manufacturing Conference 2019, Munich, Germany, 24 July 2019; pp. 1–7.
15. Guo, Y.; Liu, Y.; Oerlemans, A.; Lao, S.; Wu, S.; Lew, M.S. Deep learning for visual understanding: A review. Neurocomputing 2016, 187, 27–48. [CrossRef]
16. Voulodimos, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E. Deep Learning for Computer Vision: A Brief Review. Comput. Intell. Neurosci. 2018, 2018, 7068349. [CrossRef] [PubMed]
17. Ting, F.F.; Tan, Y.J.; Sim, K.S. Convolutional neural network improvement for breast cancer classification. Expert Syst. Appl. 2018, 120, 103–115. [CrossRef]
18. Acharya, U.R.; Oh, S.L.; Hagiwara, Y.; Tan, J.H.; Adeli, H. Deep convolutional neural network for the automated detection and diagnosis of seizure using EEG signals. Comput. Biol. Med. 2018, 100, 270–278. [CrossRef] [PubMed]
19. Perol, T.; Gharbi, M.; Denolle, M. Convolutional neural network for earthquake detection and location. Sci. Adv. 2018, 4, e1700578. [CrossRef]
20. Dung, C.V.; Anh, L.D. Autonomous concrete crack detection using deep fully convolutional neural network. Autom. Constr. 2019, 99, 52–58. [CrossRef]
21. Zhang, L.; Yang, F.; Zhang, Y.D.; Zhu, Y.J. Road crack detection using deep convolutional neural network. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3708–3712.
22. Nakazawa, T.; Kulkarni, D. Wafer Map Defect Pattern Classification and Image Retrieval Using Convolutional Neural Network. IEEE Trans. Semicond. Manuf. 2018, 31, 309–314. [CrossRef]
23. Urbonas, A.; Raudonis, V.; Maskeliūnas, R.; Damaševičius, R. Automated Identification of Wood Veneer Surface Defects Using Faster Region-Based Convolutional Neural Network with Data Augmentation and Transfer Learning. Appl. Sci. 2019, 9, 4898. [CrossRef]
24. Khumaidi, A.; Yuniarno, E.M.; Purnomo, M.H. Welding defect classification based on convolution neural network (CNN) and Gaussian kernel. In Proceedings of the 2017 International Seminar on Intelligent Technology and Its Applications (ISITIA), Surabaya, Indonesia, 28–29 August 2017; pp. 261–265. [CrossRef]
25. Alom, M.Z.; Taha, T.M.; Yakopcic, C.; Westberg, S.; Sidike, P.; Nasrin, M.S.; van Essen, B.C.; Awwal, A.A.S.; Asari, V.K. The History Began from AlexNet: A Comprehensive Survey on Deep Learning Approaches. March 2018. Available online: http://arxiv.org/pdf/1803.01164v2 (accessed on 26 August 2021).
26. O'Shea, K.; Nash, R. An Introduction to Convolutional Neural Networks. November 2015. Available online: http://arxiv.org/pdf/1511.08458v2 (accessed on 26 August 2021).
27. Emura, M.; Landgraf, F.J.G.; Ross, W.; Barreta, J.R. The influence of cutting technique on the magnetic properties of electrical steels. J. Magn. Magn. Mater. 2002, 254, 358–360. [CrossRef]
28. Schoppa, A.; Schneider, J.; Roth, J.-O. Influence of the cutting process on the magnetic properties of non-oriented electrical steels. J. Magn. Magn. Mater. 2000, 215, 100–102. [CrossRef]
29. Adelmann, B.; Hellmann, R. Process optimization of laser fusion cutting of multilayer stacks of electrical sheets. Int. J. Adv. Manuf. Technol. 2013, 68, 2693–2701. [CrossRef]
30. Adelmann, B.; Lutz, C.; Hellmann, R. Investigation on and tensile strength of laser welded electrical sheet stacks. In Proceedings of the 2018 IEEE 14th International Conference on Automation Science and Engineering (CASE), Munich, Germany, 20–24 August 2018; pp. 565–569.
31. Arntz, D.; Petring, D.; Stoyanov, S.; Quiring, N.; Poprawe, R. Quantitative study of melt flow dynamics inside laser cutting kerfs by in-situ high-speed video-diagnostics. Procedia CIRP 2018, 74, 640–644. [CrossRef]
32. Tenner, F.; Klämpfl, F.; Schmidt, M. How fast is fast enough in the monitoring and control of laser welding? In Proceedings of the Lasers in Manufacturing Conference, Munich, Germany, 22–25 June 2015.
33. Keshari, R.; Vatsa, M.; Singh, R.; Noore, A. Learning Structure and Strength of CNN Filters for Small Sample Size Training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018.

34. Chen, G.; Han, T.X.; He, Z.; Kays, R.; Forrester, T. Deep convolutional neural network based species recognition for wild animal monitoring. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 858–862.
35. Dong, Z.; Wu, Y.; Pei, M.; Jia, Y. Vehicle Type Classification Using a Semisupervised Convolutional Neural Network. IEEE Trans. Intell. Transp. Syst. 2015, 16, 2247–2256. [CrossRef]
36. Zhang, X.; Zhou, X.; Lin, M.; Sun, J. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices. July 2017. Available online: http://arxiv.org/pdf/1707.01083v2 (accessed on 26 August 2021).
37. Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-Level Accuracy with 50x Fewer Parameters and <0.5MB Model Size. February 2016. Available online: http://arxiv.org/pdf/1602.07360v4 (accessed on 26 August 2021).
38. Bello, I.; Zoph, B.; Vasudevan, V.; Le, Q.V. Neural Optimizer Search with Reinforcement Learning. September 2017. Available online: http://arxiv.org/pdf/1709.07417v2 (accessed on 26 August 2021).
39. An, S.; Lee, M.; Park, S.; Yang, H.; So, J. An Ensemble of Simple Convolutional Neural Network Models for MNIST Digit Recognition. arXiv 2020, arXiv:2008.10400.