
Article

Level Crossing Barrier Machine Faults and Anomaly Detection with the Use of Motor Current Waveform Analysis

Damian Grzechca 1,* , Paweł Rybka 1,2 and Roman Pawełczyk 1,3

1 Faculty of Automatic Control, Electronics and Computer Science, Silesian University of Technology, 44-100 Gliwice, Poland
2 Digital Fingerprints S.A., 40-599 Katowice, Poland; Pawel.Rybka@fingerprints.digital
3 Alstom/Bombardier Transportation (ZWUS) Polska Sp. z o.o., 40-142 Katowice, Poland; [email protected]
* Correspondence: [email protected]

Abstract: Barrier machines are a key component of automatic level crossing systems ensuring safety on railroad crossings. Their failure results not only in delayed railway transportation, but also puts human life at risk. To prevent faults in this critical safety element of automatic level crossing systems, it is recommended that fault and anomaly detection algorithms be implemented. Both algorithms are important in terms of safety (information on whether a barrier boom has been lifted/lowered as required) and predictive maintenance (information about the condition of the mechanical components). Here, the authors propose fault models for barrier machine fault and anomaly detection procedures based on current waveform observation. Several algorithms were applied and then assessed, such as self-organising maps (SOM), artificial neural network, local outlier factor (LOF) and isolation forest. The advantage of the proposed solution is that there is no change of hardware, which is already homologated, and the use of the existing sensors (in a current measurement module). The methods under evaluation demonstrated acceptable rates of detection accuracy of the simulated faults, thereby enabling a practical application at the test stage.

Keywords: crossing barrier machines; supply current; anomaly detection; neural networks; outlier detection

1. Introduction

The main task of automatic level crossing systems is to ensure the safety of road users and approaching trains [1–3]. They must comply with the requirements of Safety Integrity Level 4 [4] as the consequences of the majority of incidents are usually very severe. According to the official statistical data for 2019 [5], there were 185 safety incidents related to level crossing systems in Poland and as a result 55 people were killed and 19 were severely wounded.

A barrier machine is one of the most important and widely used warning devices—a sample barrier machine is shown in Figure 1. It is required that barrier booms are lowered within a defined time interval after the pre-warning sequence is successfully completed. When the barrier booms are lowered, it is assumed that the road users are well protected against irresponsible entry to the level crossing area. This may appear an easy task, but it is very difficult to ensure near 100% reliability of barrier machines considering obstacle detection, fault diagnosis and predictive maintenance [6]. The idea of increasing the reliability of railway systems with the use of current waveform analysis and outlier detection algorithms is presented in the paper.

Predictive maintenance, classification of patterns of machine health and fault detection are significant aspects involved in developing innovative Industry 4.0 solutions [6,7]. Researchers from all over the world have put a strong emphasis on improving the reliability of the manufacturing process. The common approach to estimating the reliability of systems based on DC motors is to analyse their input current; any readings that deviate from standard waveforms may indicate stall, overload, damage, or failure [8,9]. Scientists have developed numerous solutions for examining the health of machines or classifying faults detected in such systems. For example, current signature analysis is used in induction motors to provide valuable information on bottle capping failures [10]. Supply current analysis is also used in the automotive industry for examining the reliability of advanced driver-assistance systems (ADAS) modules [11].

Figure 1. Barrier machine of an automatic level crossing system.

Barrier machine design and functional requirements differ depending on the country of application. For the purpose of this research, requirements valid for two countries, i.e., Poland [9,10] and the United Kingdom [11], were taken into consideration. One of the important factors that has an impact on the testing scenario is the range of the angular movement of the boom. To ensure a stable upper position and provide self-falling capabilities, the barrier boom when lifted does not reach a fully vertical position. Similarly, in the case of the lower position, the boom cannot reach a fully horizontal position to enable adaptation to the road profile and ensure reliable operation. Considering these constraints, the movement range of the test stand was adjusted to be 80° in total.

In this paper, however, we put the emphasis on detecting anomalies of any kind, which either leads to estimating the degree of barrier machine wear or detecting their sudden damage [12]. Among all of the outlier detection algorithms, there are simple ones such as scalar comparison (e.g., an aggregate of the whole waveform—a mean, a standard deviation, etc.), but there are also more sophisticated ones such as analysing feature vectors compared to the whole dataset. The vast majority of methods used for anomaly analysis are based on unsupervised learning, e.g., local outlier factor [13], isolation forest [14], one class support vector machine [15], robust covariance and autoencoder neural network [16]. These methods are trained using the whole population of data, and the models that are generated are capable of distinguishing between correct vectors and anomalies.

2. Fault Models

A barrier machine has a significant impact on overall safety and special attention must be paid to its reliability analysis. Therefore, based on experience and acquired data, a set of the most common faults was introduced. Four representative barrier machine faults were taken into consideration: (a) a boom hits a solid obstacle, (b) a boom hits a flexible obstacle, (c) a gear fault, and (d) white noise. All of the relevant representative current waveforms are shown in Figure 2. All these faults seriously disrupt barrier machine performance as the lowering of the barrier booms is delayed at the level crossing area. Consequently, train traffic is impacted by a faulty level crossing subsystem. The accumulated duration of the delay increases, thereby resulting in significant losses to the rail operator.


Figure 2. (a) Current waveforms when a barrier boom is moving down. Correct readings are marked in grey; anomalies in pink (boom has hit a solid obstacle), green (white noise), blue (gear fault), red (boom has hit a flexible obstacle); (b) Current waveforms when a barrier boom is going up. Correct readings are marked in grey; anomalies in pink (a serious damage), green (white noise), blue (gear fault), red (obstacle has been hit).

The exemplary vector of an undistorted current waveform can be written as follows:

I = [i_1, i_2, \ldots, i_{80}] \quad (1)

The whole dataset consists of N current waveforms, which is denoted by J:

J = [I_1; I_2; \ldots; I_N] \quad (2)

From (1) and (2), J can be written as follows:

J =
\begin{bmatrix}
i_{1,1} & \cdots & i_{1,80} \\
\vdots & \ddots & \vdots \\
i_{N,1} & \cdots & i_{N,80}
\end{bmatrix}
\quad (3)

2.1. Boom Hits a Solid Obstacle

In this fault model it is assumed that the movement of the boom is stopped suddenly, for example, by hitting the roof of a car (a fault model for a solid obstacle, FSO). The current in the barrier machine motor circuit increases following the growing tension of the boom, see the pink trace in Figure 2. Depending on the interface circuit structure, the overcurrent activates an overcurrent circuit breaker or an electronic protection circuit. In the first case, the level crossing requires intervention by maintenance personnel. The resulting delay caused by a faulty barrier machine is significant. In the other case, a fully autonomous resolution of the problem is still possible. When an overcurrent occurs, the barrier machine is stopped and the boom can be raised again. After a defined period of time another attempt to lower it may be made.

i_{FSO,x} =
\begin{cases}
i_{rand\_int(1,N),x} & \text{if } x < k_{FSO} \\
\min\big(i_{rand\_int(1,N),x} + i_{rise} \cdot (x - k_{FSO}),\ i_{stall}\big) & \text{if } k_{FSO} \le x \le n_{FSO} \\
0 & \text{if } x > n_{FSO}
\end{cases}
\quad (4)

where: iFSO,x is a motor current for FSO, rand_int(1, N) holds the same integer value for the whole IFSO vector, kFSO denotes an angle when a boom hits the obstacle, nFSO denotes an angle when the overcurrent protection is activated, istall = 9 is the stall current of the DC motor and irise = 0.75 is the previously determined slope for a solid obstacle.
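For illustration, Formula (4) can be reproduced in a few lines of NumPy. A minimal sketch, assuming the healthy waveforms are stored row-wise in an N × 80 array; the function name, the way a single base waveform is drawn, and the example fault indices are our own and not part of the authors' tooling:

```python
import numpy as np

I_STALL = 9.0   # stall current of the DC motor (Formula (4))
I_RISE = 0.75   # previously determined slope for a solid obstacle


def inject_fso(waveforms: np.ndarray, k_fso: int, n_fso: int) -> np.ndarray:
    """Simulate a boom hitting a solid obstacle (FSO), Formula (4).

    waveforms : (N, 80) array of healthy current readings.
    k_fso     : index (angle) at which the boom hits the obstacle.
    n_fso     : index (angle) at which the overcurrent protection trips.
    """
    base = waveforms[np.random.randint(waveforms.shape[0])].copy()  # rand_int(1, N)
    x = np.arange(base.size)

    faulty = base.copy()
    rising = (x >= k_fso) & (x <= n_fso)
    faulty[rising] = np.minimum(base[rising] + I_RISE * (x[rising] - k_fso), I_STALL)
    faulty[x > n_fso] = 0.0   # motor switched off by the protection circuit
    return faulty
```

For example, inject_fso(J, 20, 30) (with hypothetical fault indices 20 and 30) yields a waveform that rises towards the stall current and then drops to zero, qualitatively matching the pink trace in Figure 2a.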

2.2. Boom Hits a Flexible Obstacle

This fault model is to some extent similar to the previous one. The mechanical characteristics of the obstacle are the main difference. A typical example would be the overhanging branches of a tree or hitting an animal. It may be any other object that impacts the operation of the boom in a way that is hard to detect with a standard diagnostic system. Therefore, a fault model for a flexible obstacle (FFO) has to be applied. The motor's characteristic supply current is locally distorted when the boom hits the obstacle. The barrier machine movement times are extended but might still be within an acceptable range. In this scenario, the mechanical condition of the boom might deteriorate rapidly. This situation is depicted by the red trace in Figure 2.

i_{FFO,x} =
\begin{cases}
i_{rand\_int(1,N),x} & \text{if } x \le k_{FFO} \text{ or } x > k_{FFO} + 5 \\
i_{rand\_int(1,N),x} \cdot F(x - k_{FFO}) & \text{if } k_{FFO} < x \le k_{FFO} + 5
\end{cases}
\quad (5)

where: iFFO,x is a FFO motor current, rand_int(1, N) holds the same integer value for the whole IFFO vector, kFFO denotes an angle (indices) when a boom hits the obstacle, and F = {1.1, 1.3, 1.4, 1.3, 1.1} is an assumed FFO fault model.

2.3. Gear Fault

In this case the fault model reflects the situation where the enclosure of the barrier machine is not tight enough. This might be due to damage or negligence by the maintenance staff, resulting in bent or incorrectly installed covers. Flooding may be another typical reason. As a result, friction losses within the mechanical gearbox increase as the dirt builds up. The levels of motor supply current are higher throughout the full barrier boom movement, see the blue trace in Figure 2. This situation is also detected by the conventional diagnostic system, but only when the operation times of the barrier machine exceed the allowed range. The fault model for gear failure (FGF) is defined as

i_{FGF,x} = k_{FGF} \cdot i_{rand\_int(1,N),x} \quad (6)

where: iFGF,x is a motor current with distorted waveform, rand_int(1, N) holds the same integer value for the whole IFGF vector, and kFGF is a degradation coefficient that varies based on contamination level (typically kFGF ∈ ⟨1.1, 1.2⟩).

2.4. White Noise

White noise represents a general fault model of a situation where the characteristic motor supply current is distorted by some external or internal factor, see the green trace in Figure 2. This usually happens due to weak wire connections or the corrosion process. The following fault model is simulated as a white noise (FN):

i_{FN,x} = i_{rand\_int(1,N),x} + k_{FN} \cdot rand(-1, 1) \quad (7)

where: iFN,x is a motor current with distorted waveform, rand_int(1, N) holds the same integer value for the whole IFN vector, and kFN is a noise coefficient that varies based on noise level (maximum amplitude for kFN is 0.15, 0.30, 0.45, 0.60, 0.75).
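The remaining three fault models, Formulas (5)–(7), follow the same pattern as the FSO sketch above. A short sketch under the same assumptions (the function names and default coefficient values are illustrative):

```python
import numpy as np

F_FFO = np.array([1.1, 1.3, 1.4, 1.3, 1.1])   # assumed FFO distortion factors, Formula (5)


def inject_ffo(base: np.ndarray, k_ffo: int) -> np.ndarray:
    """Boom hits a flexible obstacle: local distortion over 5 samples, Formula (5)."""
    faulty = base.copy()
    faulty[k_ffo + 1:k_ffo + 6] *= F_FFO       # applies to k_FFO < x <= k_FFO + 5
    return faulty


def inject_fgf(base: np.ndarray, k_fgf: float = 1.15) -> np.ndarray:
    """Gear fault: the whole waveform scaled by k_FGF in <1.1, 1.2>, Formula (6)."""
    return k_fgf * base


def inject_fn(base: np.ndarray, k_fn: float = 0.30) -> np.ndarray:
    """White noise of amplitude k_FN added to every sample, Formula (7)."""
    return base + k_fn * np.random.uniform(-1.0, 1.0, size=base.shape)
```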

3. Data Pre-Processing

In order to develop a successful model, certain input data operations have to be performed. The training dataset contains 4974 current waveforms (2394 for the barrier boom going up and 2580 for the barrier boom going down) and the test dataset contains 96 waveforms, which are artificially generated to simulate numerous real-life faults. Each of the waveforms consists of 80 data points (features) that indicate the instantaneous current level for a specific angular position of a barrier boom. Each point corresponds to an angular movement of the boom by one degree. Next, the following steps were proposed, as shown in Figure 3:

• Data cleaning—removal of the damaged waveforms from the training dataset. The data may be corrupted due to unexpected power outages, installation works, equipment failures, etc.
• Data scaling—the use of a min-max scaler (mostly for autoencoder performance boosting).
• Data labelling—either with the use of prior knowledge or clustering methods.
• Clustering model (optional).

Figure 3. Block diagram representing data pre-processing pipeline.

3.1. Dataset Preparation

The first operation to be performed was to divide input data into two subgroups, that is, readouts indicating downward boom movement (lowering of the barrier boom) and readouts indicating upward boom movement (lifting of the barrier boom). Additionally, outliers also must be labelled based on expert knowledge. The dataset preparation process is depicted in Figure 4.


Figure 4. (a) Current waveforms of all readouts divided into groups according to their status: closing of the barrier machine (yellow) and opening the barrier machine (blue); (b) Correct current readouts (grey) and outliers (red).


3.2. Data Scaling and Labelling

The second step was data scaling so that algorithms such as neural networks will not favour larger values over smaller ones (where both are equally significant). The scaler used in the application is a MinMax scaler (8) and its formula is as follows



i^{S}_{a,b} = \frac{i_{a,b} - \min(i_{1,b}, i_{2,b}, \ldots, i_{N,b})}{\max(i_{1,b}, i_{2,b}, \ldots, i_{N,b}) - \min(i_{1,b}, i_{2,b}, \ldots, i_{N,b})} \quad (8)

where:
• i^{S}_{a,b} is an output scaled feature vector
• i_{a,b} is an input feature vector.
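Formula (8) is the standard per-feature min-max normalisation, so it can be reproduced either directly or with scikit-learn's MinMaxScaler. A minimal sketch; the placeholder array J stands in for the N × 80 waveform matrix of Formula (3):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

J = np.random.rand(100, 80)   # placeholder for the N x 80 waveform matrix, Formula (3)

# Direct implementation of Formula (8): min and max are taken per column (feature) b
J_scaled = (J - J.min(axis=0)) / (J.max(axis=0) - J.min(axis=0))

# Equivalent with scikit-learn (feature_range defaults to (0, 1))
scaler = MinMaxScaler()
J_scaled_sk = scaler.fit_transform(J)

assert np.allclose(J_scaled, J_scaled_sk)
```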
3.3. Clustering Method

The vast majority of current waveforms (with regard to barrier machine systems) come with labelled data, i.e., each reading is marked with a flag denoting whether the boom is going up or down. Unfortunately, there are systems that do not provide such handy data; in these cases, the waveforms need to be labelled in order to apply the correct classifier in further analysis. To do so, there are numerous algorithms that can be applied, such as k-nearest neighbours, Gaussian mixture models, DBSCAN or others (one of the clustering methods, a self-organizing map, is described further on in the paper). When the waveform is correctly labelled, anomalies can be analysed.

4. Outlier Detection

Anomalies were divided into two major subgroups—outliers [17] and novelties [18]. The first is based on the fact that there is a certain number of deviations among all the readouts in the training dataset. These algorithms find more populated regions and separate them from distant samples, which indicate anomalies. The other approach, novelty detection, takes only the unpolluted subset of data into account (for the sake of training) and focuses on detecting anomalies as samples not belonging to a pre-trained group. In the sub-sections below, we compare four highly different anomaly detection algorithms: local outlier factor, isolation forest, self-organizing map and autoencoder neural network. Having prepared the data, the anomaly detection algorithms can be applied. The remaining steps to be taken include training the model and feeding unknown data into it to obtain a continuous probability score or a binary classification score. The whole data flow is shown in Figure 5.

Figure 5. Block diagram representing the data flow in the presented approach.

4.1. Local Outlier Factor (LOF)

One of the most popular approaches for outlier searching is LOF. The main idea behind this algorithm is to compare a sample's local density with the local densities of its k-nearest neighbours. The reachability distance (RD) and the local reachability density (LRD) derived from the local density are given by Formulas (9) and (10) below and are used directly to calculate the LOF score (11).

RD_k(I_A, I_B) = \max\big(ND_k(I_B),\ d(I_A, I_B)\big) \quad (9)

LRD_k(I_A) = 1 \Bigg/ \frac{\sum_{j \in N_k(I_A)} RD_k(I_A, j)}{|N_k(I_A)|} \quad (10)

LOF_k(I_A) = \frac{\sum_{j \in N_k(I_A)} LRD_k(j)}{|N_k(I_A)| \cdot LRD_k(I_A)} \quad (11)

where:
• the k-distance ND_k(I_B) is defined as the distance between object I_B and its k-th neighbour
• N_k(I_A) denotes the set of k nearest neighbours of object I_A.

The lower a sample's local density compared to that of its neighbours (i.e., the higher its LOF score), the more likely the sample is an outlier. Figure 6 shows the outcome of outlier detection for the three most anomalous readings with the number of neighbours equalling k = 20.


Figure 6. (a) Local outlier factor on the whole training dataset (boom moving down); (b) Three most anomalous waveforms (red) among the whole training dataset (grey).

The local outlier factor algorithm managed to successfully classify two of the most anomalous waveforms, but incorrectly classified the remaining one (see Figure 4 for expert-chosen outliers).
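A minimal scikit-learn sketch of this step; the placeholder data and variable names are our own, while k = 20 follows the text:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

J_scaled = np.random.rand(100, 80)        # placeholder for the scaled waveforms of one movement direction

lof = LocalOutlierFactor(n_neighbors=20)  # k = 20 as in Figure 6
lof.fit_predict(J_scaled)                 # returns -1 for outliers, 1 for inliers

# negative_outlier_factor_ holds -LOF(I_A); the smallest values are the strongest outliers
scores = lof.negative_outlier_factor_
worst3 = np.argsort(scores)[:3]           # indices of the three most anomalous waveforms
print(worst3)
```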

4.2. Isolation Forest (iForest)

Isolation forest is widely known for its low computational complexity and its effectiveness on high-dimensional data. Unlike the majority of anomaly detection methods, iForest is based on actually isolating anomalies instead of profiling correct readings. Its operation is based on binary search trees and can be described as dividing the hyperplane n times for each sample (until a sufficient number of slices for distinguishing one particular object is achieved) and building forests on those sets of divisions. The fewer divisions each sample requires for its homogeneous classification, the more likely it is an anomaly. The anomaly score for iForest is described by Formula (12) below.

IF(I_A, n) = 2^{-\frac{E(h(I_A))}{c(n)}} \quad (12)

where:
• h(I_A) is the level of the branch for classifying observation I_A
• c(n) is the average branch level
• n denotes the number of external nodes
• E(h(I_A)) denotes the average value of h(I_A) over all the isolation trees.

The result of this approach on the training dataset of current waveforms of barrier machines (boom going down) is presented in Figure 7 below.


Figure 7. (a) iForest raw scoring function on the whole training dataset (boom moving down); (b) Three most anomalous waveforms (red) among the whole training dataset (grey).

Isolation forest correctly classified the most outlying waveform, but incorrectly classified the other two.
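An analogous scikit-learn sketch; the placeholder data and the number of trees are assumptions, while the ranking of the three most anomalous waveforms follows the text:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

J_scaled = np.random.rand(100, 80)   # placeholder for the scaled training waveforms

iforest = IsolationForest(n_estimators=100, random_state=0)
iforest.fit(J_scaled)

# score_samples returns the opposite of the anomaly score in Formula (12):
# lower values mean fewer splits were needed to isolate a sample, i.e. a more likely anomaly
scores = iforest.score_samples(J_scaled)
worst3 = np.argsort(scores)[:3]
print(worst3)
```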

4.3. Self-Organizing Map (SOM)

Self-organizing maps are a great tool when it comes to computing two-dimensional (commonly) representations of input spaces and preserving their topology. As the algorithm progresses, the map becomes more and more like the population; the neurons start to cover the whole dataset in terms of (e.g., Euclidean) distance. The random n by m vector map is updated so that, for each randomly picked input vector I_{rand_int(1,N)} from the J population of all current sample vectors [I_1, I_2, ..., I_N], its best matching unit (BMU) W is found in the Kohonen map. This unit and its closest neighbours are updated (Formula (13)) with the learning rate η until all the iterations are finished.

W' = W + \eta \big(I_{rand\_int(1,N)} - W\big) \quad (13)

This feature enables the use of SOM as an outlier detection algorithm. The readings that are placed in less populated regions may be considered anomalies while correct vectors tend to form larger clusters, as shown in Figure 8.
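A bare-bones sketch of the update rule in Formula (13). A full SOM implementation (for instance the MiniSom package) would also shrink the learning rate and the neighbourhood radius over the iterations; the map size, learning rate and 4-neighbourhood used here are illustrative assumptions:

```python
import numpy as np

n, m, dim = 10, 10, 80                 # illustrative map size for 80-point waveforms
rng = np.random.default_rng(0)
W = rng.random((n, m, dim))            # random n x m map of weight vectors
eta = 0.5                              # learning rate


def som_step(W, sample, eta):
    """One SOM iteration: find the BMU and pull it (and its neighbours) towards the sample."""
    dist = np.linalg.norm(W - sample, axis=2)               # Euclidean distance to every unit
    bi, bj = np.unravel_index(np.argmin(dist), dist.shape)  # best matching unit (BMU)
    # Formula (13): W' = W + eta * (I - W), applied to the BMU and its direct neighbours
    for di, dj in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:
        i, j = bi + di, bj + dj
        if 0 <= i < W.shape[0] and 0 <= j < W.shape[1]:
            W[i, j] += eta * (sample - W[i, j])
    return W


J = rng.random((100, dim))                           # placeholder population of current waveforms
W = som_step(W, J[rng.integers(J.shape[0])], eta)    # one randomly picked I_rand_int(1,N)
```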



Figure 8. (a) Distance map for the whole dataset; (b) SOM representation of barrier machine opening (yellow) and closing (blue) current waveforms.


Although SOM turned out to be a perfect classifier (all the readings were clustered correctly into two subsets), the hypothesis on finding anomalies in less populated areas appeared to be incorrect, that is, contrary to expectations, outliers were assigned to bigger clusters. To get around this problem, the SOM algorithm was fed with clustered data (both subsets) and the outcome of this approach is presented in Figure 9.


Figure 9. (a) SOM representation of barrier machine opening current waveforms; (b) SOM representation of barrier machine closing current waveforms.

Such a division of datasets caused another problem, as vectors in the same subgroup did not significantly differ from one another; the maps were almost equally covered by the samples, thereby disabling any further clusterization or outlier detection.

4.4. Autoencoder Neural Network

Neural networks are becoming increasingly popular as the computational capabilities of computers develop. They are widely used for image recognition, classification, regression and much more. Among all the back propagation neural network structures there are several subgroups, such as artificial neural networks [19], convolutional neural networks [20], recurrent neural networks [21], and others. For the purpose of anomaly detection, one network architecture, i.e., the autoencoder [22,23], is widely used. It is based on the idea of compressing information into lower dimensionality and then decoding it. The network generalizes to the most common samples, so the coding–decoding process is burdened with only a minor error for typical samples, while outliers cause a larger mismatch between input and output. The network shown in Figure 10 was created for the purpose of finding outliers in the delivered dataset.


Figure 10. (a) Visual representation of the autoencoder's architecture; (b) Visualization of encoding neurons for the training dataset.

Figure 11 shows the outcome of outlier detection for the autoencoder shown in Figure 10, with an input layer size of 80, two encoding neurons, a decoding layer size of 80, rectified linear unit (ReLU) encoding activation function, sigmoid decoding activation function, 3000 training epochs and a batch size equal to 1.
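The described architecture maps onto a few lines of Keras. A sketch: the layer sizes, activations, epoch count and batch size follow the text, while the Adam optimizer, the mean-squared-error training loss and the placeholder data are our assumptions:

```python
import numpy as np
from tensorflow import keras

J_scaled = np.random.rand(100, 80).astype("float32")   # placeholder for the scaled waveforms

autoencoder = keras.Sequential([
    keras.Input(shape=(80,)),
    keras.layers.Dense(2, activation="relu"),      # two encoding neurons
    keras.layers.Dense(80, activation="sigmoid"),  # decoding layer
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(J_scaled, J_scaled, epochs=3000, batch_size=1, verbose=0)

# per-waveform reconstruction error; the largest errors mark the most anomalous readings
recon = autoencoder.predict(J_scaled, verbose=0)
mse = np.mean((J_scaled - recon) ** 2, axis=1)
worst3 = np.argsort(mse)[-3:]
print(worst3)
```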


Figure 11. (a) Average mean squared error of the autoencoder's decoding for each waveform in the population; (b) Three most anomalous waveforms (red) selected with the autoencoder from the population (grey).



5. Comparison of the Methods

For assessing the accuracy of each method, we used 50 supply current waveforms gathered from a real barrier machine and 43 waveform models from Section 2. The simulated faults included: boom hits a solid obstacle, e.g., a car (FSOsim = 13 waveforms), gear wear at certain levels (FGFsim = 10 waveforms), boom hits a flexible obstacle, e.g., a branch or an animal (FFOsim = 10 waveforms), and supply current noises of different amplitude (FNsim = 10 waveforms). All of the proposed anomaly detection algorithms (LOF, iForest and autoencoder) were tested on these values. Table 1 shows the confusion matrices which include the numerical values for true positive (TP), true negative (TN), false positive (FP) and false negative (FN) classifier hits.

Table 1. Confusion matrices for LOF, iForest and autoencoder.

LOF                    Positive    Negative
Predicted Positive     TP: 315     FP: 61
Predicted Negative     FN: 85      TN: 339

iForest                Positive    Negative
Predicted Positive     291         42
Predicted Negative     109         358

Autoencoder            Positive    Negative
Predicted Positive     330         57
Predicted Negative     70          343

Additionally, sensitivity, specificity and accuracy scores (14) are presented in Table 2.

sensitivity = \frac{TP}{TP + FN}, \quad specificity = \frac{TN}{TN + FP}, \quad accuracy = \frac{TN + TP}{TN + TP + FN + FP} \quad (14)

Table 2. Sensitivity and specificity scores for LOF, iForest, and Autoencoder.

Score          LOF     iForest    Autoencoder
Sensitivity    79%     73%        83%
Specificity    85%     90%        86%
Accuracy       82%     81%        84%
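As a quick sanity check, the Table 2 values can be reproduced from the confusion matrices in Table 1 with Formula (14); a minimal sketch:

```python
def scores(tp, fp, fn, tn):
    """Sensitivity, specificity and accuracy as defined in Formula (14)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tn + tp) / (tn + tp + fn + fp)
    return sensitivity, specificity, accuracy


confusion = {
    "LOF": (315, 61, 85, 339),      # (TP, FP, FN, TN) from Table 1
    "iForest": (291, 42, 109, 358),
    "Autoencoder": (330, 57, 70, 343),
}
for name, cm in confusion.items():
    sens, spec, acc = scores(*cm)
    # reproduces the Table 2 values (to within rounding)
    print(f"{name}: sensitivity {sens:.3f}, specificity {spec:.3f}, accuracy {acc:.3f}")
```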

The data presented in Tables 1 and 2 show the sensitivity and specificity scores of various classifiers optimized for reaching maximum accuracy. As can be seen, the autoencoder achieved the best result for accuracy of 84%. Nonetheless, local outlier factor performs surprisingly well for a method that uses only neighbourhood similarity metrics, obtaining a slightly worse score for both sensitivity and specificity. Isolation forest did far worse in regard to sensitivity compared to both autoencoder and LOF, but on the other hand it showed the best specificity (90%). Detailed information on the specific anomaly detection rate can be found in Table 3 (scores obtained for the same models as used for creating Tables 1 and 2).

Table 3. Anomaly detection rate for the proposed approaches.

Anomaly                                          Detection Rate
                                                 LOF      iForest    Autoencoder
Severe gear damage                               100%     93%        100%
Obstacle hit                                     47%      30%        68%
White noise                                      86%      92%        92%
Gear wear                                        82%      76%        82%
Non-anomalous waveforms classified correctly     85%      90%        86%

When it comes to detecting anomalies using SOM (using the same data as for evaluating all the models listed above), 97.5% of the total anomalies concerning closing the barrier machine were placed in the most populated cluster (the one that contains around 8% of all non-anomalous waveforms). For the barrier boom going up, 97.6% of the total anomalies concerning closing the barrier machine were grouped in the fourth most populated cluster (the one containing about 6% of all non-anomalous waveforms). Such an output completely disqualifies Kohonen maps from further use, as the vast majority of anomalies are classified in the same manner as the most representative inliers.

The sensitivity and specificity scores can each be increased at the expense of lowering the other (using the cut-off value settings). Such a feature plays an important role when taking into account, e.g., the cost of unnecessary relocations of service personnel—in this case the specificity score should be boosted (by raising the cut-off). On the other hand, when the barrier machine is placed in a dense traffic area, it is better to react to even false-positive alarms than to disregard even one real alarm indication (this is where the sensitivity score becomes important). However, we must remember to tune these settings with great caution. The ROC curve accurately represents the model's performance in terms of tuning the cut-off values (Figure 12). The curve is defined as a function of the true positive rate vs. false positive rate values; the higher the area under the curve, the better a particular model performs. A comparison of the models proposed in this paper in terms of ROC curve visualization is presented below.
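A sketch of how such ROC curves can be produced with scikit-learn, assuming a vector of expert anomaly labels and a continuous anomaly score from one of the models above (all variable names and the placeholder data are illustrative):

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=93)   # placeholder: 1 = anomalous waveform, 0 = correct one
scores = rng.random(93)                # placeholder: continuous score, higher = more anomalous

fpr, tpr, thresholds = roc_curve(y_true, scores)
print("area under the curve:", auc(fpr, tpr))

# Raising the cut-off boosts specificity (fewer false alarms); lowering it boosts
# sensitivity (fewer missed faults). Youden's J is one possible selection criterion.
cutoff = thresholds[np.argmax(tpr - fpr)]
```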


Figure 12. (a) ROC curves of the used methods for the model concerning the boom moving down; (b) ROC curves of the used methods for the model concerning the boom moving up.

What needs to be kept in mind is the nature of a particular approach, e.g., autoencoders are the best when it comes to performance, but they are also the most computationally complex; it takes several times longer to train such a model than the LOF or forest. Moreover, heuristic methods such as autoencoder and iForest may provide different models for the same training dataset, so performing several experiments (besides optimizing hyperparameters) may be a good way to obtain a proper output. Such a step is not applicable to deterministic algorithms such as LOF, as the model will always remain constant if training is run on the same data.

6. Conclusions

This paper discusses a very important aspect of the automatic level crossing systems. Faultless performance of barrier machines has a significant impact on the safety and availability of railway systems. We introduced four fault models for the barrier machine and proposed a fault and anomaly detection procedure based on the current waveform measurements. All of the barrier machine models were based on real observations acquired at the test bench. Artificial intelligence-based algorithms were proposed to detect a predefined set of faults. Among the various methods, the autoencoder method seems to be the most suitable because it had the highest sensitivity score. This enables correct detection of the defined faults, i.e., a barrier machine boom hits a flexible obstacle (FFO), and the mechanical gearbox of the barrier machine is degraded. Repeated detection of such events should trigger earlier intervention by maintenance staff. All the presented methods proved to be effective in terms of detecting serious fault states when a barrier boom hits a solid obstacle (FSO) while rising or falling.

The applied procedures with predefined faults provide new insights into a specific application—a barrier machine equipped with a boom. The proposed approach may be applied for the purpose of dynamically developing preventive maintenance within the railway industry and may increase safety at level crossings. It is worth pointing out that this additional diagnostic functionality can be added to the railway system as an isolated block; the current measurement module and processing module can easily be galvanically separated from the existing circuits of the system. The output information can be merged with the existing stream of data observed and analysed by the maintenance staff. This approach ensures that the diagnostic part of the system will not have any impact on its safety. The other option is to directly incorporate the new functionality into existing control algorithms of the level crossing system. The resulting information indicating a detected anomaly shall be processed according to the existing or newly defined requirements. Because the broken barrier machine equally affects railway and road users, a highly reliable and cost-effective solution is needed to prevent faults, traffic disruption and assure public safety.

Considering the impact of failure-free performance of barrier machines on safety, it is advisable and justifiable to continue the research on this issue. The promising results encourage further analysis, which might include examining the possibility of detecting gusts of winds of a speed close to a defined maximum (especially in the case of barrier machine applications with long booms and skirts), as well as the verification and further improvement of the presented methods using real-life data.

Author Contributions: Conceptualization, D.G. and P.R. and R.P.; methodology, D.G. and P.R. and R.P.; software, P.R., R.P.; validation, D.G. and P.R. and R.P.; formal analysis, D.G., P.R. and R.P.; investigation, D.G. and P.R. and R.P.; resources, R.P.; data curation, R.P.; writing—original draft preparation, P.R.; writing—review and editing, D.G., P.R. and R.P.; visualization, P.R.; supervision, D.G. All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available as a non-disclosure agreement is required.

Acknowledgments: This work was supported in part by the Polish Ministry of Science and Higher Education as part of the Implementation Doctorate program at the Silesian University of Technology, Gliwice, Poland (contract no. 0053/DW/2018), and partially by Statutory Research funds from the Department of Electronics, Electrical Engineering and Microelectronics, Faculty of Automatic Control, Electronics and Computer Science, Silesian University of Technology, Gliwice, Poland for Statutory Activity.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Liang, C.; Ghazel, M.; Cazier, O.; Bouillaut, L. Advanced Model-Based Risk Reasoning on Automatic Railway Level Crossings. Saf. Sci. 2020, 124, 104592. [CrossRef]
2. Freeman, J.; McMaster, M.; Rakotonirainy, A. An Exploration into Younger and Older Pedestrians' Risky Behaviours at Train Level Crossings. Safety 2015, 1, 16–27. [CrossRef]
3. Kampczyk, A. An Innovative Approach to Surveying the Geometry of Visibility Triangles at Railway Level Crossings. Sensors 2020, 20, 6623. [CrossRef] [PubMed]
4. PN-EN 50129:2019-01—Wersja Angielska. Available online: https://sklep.pkn.pl/pn-en-50129-2019-01e.html (accessed on 19 January 2021).
5. Statystyki. Available online: https://www.bezpieczny-przejazd.pl/o-kampanii/statystyki/ (accessed on 8 January 2021).
6. Jia, X.; Jin, C.; Buzza, M.; Di, Y.; Siegel, D.; Lee, J. A Deviation Based Assessment Methodology for Multiple Machine Health Patterns Classification and Fault Detection. Mech. Syst. Signal Process. 2018, 99, 244–261. [CrossRef]
7. Oztemel, E.; Gursev, S. Literature Review of Industry 4.0 and Related Technologies. J. Intell. Manuf. 2020, 31, 127–182. [CrossRef]
8. Ganesan, S.; David, P.W.; Balachandran, P.K.; Samithas, D. Intelligent Starting Current-Based Fault Identification of an Induction Motor Operating under Various Power Quality Issues. Energies 2021, 14, 304. [CrossRef]
9. Lee, C.-Y.; Cheng, Y.-H. Motor Fault Detection Using Wavelet Transform and Improved PSO-BP Neural Network. Processes 2020, 8, 1322. [CrossRef]
10. Azamfar, M.; Jia, X.; Pandhare, V.; Singh, J.; Davari, H.; Lee, J. Detection and Diagnosis of Bottle Capping Failures Based on Motor Current Signature Analysis. Procedia Manuf. 2019, 34, 840–846. [CrossRef]
11. Grzechca, D.; Ziębiński, A.; Rybka, P. Enhanced Reliability of ADAS Sensors Based on the Observation of the Power Supply Current and Neural Network Application. In Computational Collective Intelligence; Nguyen, N.T., Papadopoulos, G.A., Jędrzejowicz, P., Trawiński, B., Vossen, G., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2017; Volume 10449, pp. 215–226, ISBN 978-3-319-67076-8.
12. Kamat, P.; Sugandhi, R. Anomaly Detection for Predictive Maintenance in Industry 4.0—A Survey. E3S Web Conf. 2020, 170, 02007. [CrossRef]
13. Chen, Z.; Xu, K.; Wei, J.; Dong, G. Voltage Fault Detection for Lithium-Ion Battery Pack Using Local Outlier Factor. Measurement 2019, 146, 544–556. [CrossRef]
14. Cheng, Z.; Zou, C.; Dong, J. Outlier Detection Using Isolation Forest and Local Outlier Factor. In Proceedings of the Conference on Research in Adaptive and Convergent Systems, Chongqing, China, 24 September 2019; ACM: New York, NY, USA, 2019; pp. 161–168.
15. Nguyen, T.-B.-T.; Liao, T.-L.; Vu, T.-A. Anomaly Detection Using One-Class SVM for Logs of Juniper Router Devices. In Industrial Networks and Intelligent Systems; Duong, T.Q., Vo, N.-S., Nguyen, L.K., Vien, Q.-T., Nguyen, V.-D., Eds.; Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering; Springer International Publishing: Cham, Switzerland, 2019; Volume 293, pp. 302–312, ISBN 978-3-030-30148-4.
16. Wang, C.; Wang, B.; Liu, H.; Qu, H. Anomaly Detection for Industrial Control System Based on Autoencoder Neural Network. Wirel. Commun. Mob. Comput. 2020, 2020, 1–10. [CrossRef]
17. Hodge, V.J.; Austin, J. An Evaluation of Classification and Outlier Detection Algorithms. arXiv 2018, arXiv:1805.00811.
18. Domingues, R.; Michiardi, P.; Barlet, J.; Filippone, M. A Comparative Evaluation of Novelty Detection Algorithms for Discrete Sequences. Artif. Intell. Rev. 2020, 53, 3787–3812. [CrossRef]
19. Abiodun, O.I.; Jantan, A.; Omolara, A.E.; Dada, K.V.; Mohamed, N.A.; Arshad, H. State-of-the-Art in Artificial Neural Network Applications: A Survey. Heliyon 2018, 4, e00938. [CrossRef] [PubMed]
20. Shamsaldin, A.; Fattah, P.; Rashid, T.; Al-Salihi, N. A Study of The Convolutional Neural Networks Applications. UKH J. Sci. Eng. 2019, 3, 31–40. [CrossRef]
21. Yu, Y.; Si, X.; Hu, C.; Zhang, J. A Review of Recurrent Neural Networks: LSTM Cells and Network Architectures. Neural Comput. 2019, 31, 1235–1270. [CrossRef] [PubMed]
22. Borghesi, A.; Bartolini, A.; Lombardi, M.; Milano, M.; Benini, L. Anomaly Detection Using Autoencoders in High Performance Computing Systems. Proc. AAAI Conf. Artif. Intell. 2019, 33, 9428–9433. [CrossRef]
23. Nolle, T.; Luettgen, S.; Seeliger, A.; Mühlhäuser, M. Analyzing Business Process Anomalies Using Autoencoders. Mach. Learn. 2018, 107, 1875–1893. [CrossRef]