Semi-Supervised Learning of Bearing Anomaly Detection via Deep Variational Autoencoders


Shen Zhang 1, Fei Ye 2, Bingnan Wang 3, Thomas G. Habetler 1

1 School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta GA, USA. 2 California PATH, University of California, Berkeley, Berkeley CA, USA. 3 Mitsubishi Electric Research Laboratories, Cambridge MA, USA. Correspondence to: Shen Zhang <[email protected]>.

arXiv:1912.01096v2 [cs.LG] 9 Dec 2019. Preprint. Under review. Work in progress.

Abstract

Most of the data-driven approaches applied to bearing fault diagnosis to date are established in the supervised learning paradigm, which usually requires a large set of labeled data collected a priori. In practical applications, however, obtaining accurate labels based on real-time bearing conditions can be far more challenging than simply collecting a huge amount of unlabeled data using various sensors. In this paper, we therefore propose a semi-supervised learning approach for bearing anomaly detection using variational autoencoder (VAE) based deep generative models, which allows for effective utilization of the dataset when only a small subset of the data has labels. Finally, a series of experiments is performed using both the Case Western Reserve University (CWRU) bearing dataset and the University of Cincinnati's Center for Intelligent Maintenance Systems (IMS) dataset. The experimental results demonstrate that the proposed semi-supervised learning scheme greatly outperforms two mainstream semi-supervised learning approaches and a baseline supervised convolutional neural network approach, with the overall accuracy improvement ranging between 3% and 30% for different proportions of labeled samples.

1. Introduction

Electric machines are widely employed in a variety of industry applications and electrified transportation systems, and they consume approximately 60% of all electric power produced. However, on certain occasions these machines may operate under unfavorable conditions, such as high ambient temperature, high moisture, and overload, which can eventually result in motor malfunctions that lead to high maintenance costs, severe financial losses, and various safety hazards. Among the many types of electric machine failures, the bearing fault is revealed to be the most common fault type, accounting for 30% to 40% of all machine failures according to several surveys conducted by the IEEE Industry Application Society (IEEE-IAS) (IEEE, 1985) and the Japan Electrical Manufacturers' Association (JEMA) (JEMA, 2000).

In the last few years, many data-driven approaches have been applied to enhance the accuracy and reliability of bearing fault diagnosis (Zhang et al., 2019; Saufi et al., 2019). Specifically, many time-series signals used for monitoring the bearing condition have high-dimensional feature spaces, which makes it ideal to exploit deep learning algorithms to perform feature extraction and classification. While many models have presented satisfactory results after proper training sessions, most of them are established in the supervised learning paradigm, which requires a large set of labeled data collected a priori under all possible fault conditions.

For bearing anomaly detection, however, it is often much easier to collect a large amount of data than to accurately obtain their corresponding labels, and this is especially the case for faults that evolve naturally over time. For instance, even though an electromechanical system will eventually be turned off due to the consequences of a fault, it is not easy to determine precisely when the first trace of such a fault actually shows up and how long it lasts at the incipient stage. Additionally, the data may also suffer from ambiguity issues due to the challenges associated with properly labeling them, especially at a transition stage when they exhibit early signs of fault features that are nevertheless far from obvious when compared to those of a fully developed fault. Therefore, for accuracy concerns, only the very beginning and the very end of the collected data can be confidently assigned their corresponding labels, leaving much of the intermediate data unlabeled, which apparently cannot be exploited using supervised learning.

To train a bearing fault classifier with a good level of generalization using only a small subset of labeled data, one approach is to apply specific algorithms that can make the most out of the labeled data and computing resources available. For example, data augmentation techniques such as generative adversarial networks (GAN) (Goodfellow et al., 2014) have already been applied in (Liu et al., 2018) and (Verstraete et al., 2019), as well as some cross-validation methods such as leave-one-out and Monte Carlo. Additionally, another promising route is to apply the semi-supervised learning paradigm to leverage both the manually labeled data and the massive unlabeled data.

Semi-supervised learning specifically considers the problem of classification when only a small subset of the data has corresponding labels, and as of now only a handful of semi-supervised learning paradigms have been applied to bearing anomaly detection. For example, a support vector data description (SVDD) model proposed in (Liu and Gryllias, 2019) utilizes cyclic spectral coherence domain indicators to build the feature space and fits a hyper-sphere, from which Euclidean distances are later calculated in order to separate data of healthy and faulty conditions. Moreover, graph-based methods are employed in (Chen et al., 2018) and (Zhao et al., 2017) to construct a graph connecting similar samples in the dataset, so that class labels can propagate through the graph from labeled to unlabeled nodes. However, these approaches are sensitive to the graph structure and require eigen-analysis of the graph Laplacian, which limits the scale to which these methods can be applied. Instead of graph-based methods, (Razavi-Far et al., 2019a) captures the structure of the data using the α-shape, which is mostly utilized for surface estimation and reduces parameter tuning.

Additionally, the recent semi-supervised deep ladder network (Rasmus et al., 2015) is implemented in (Razavi-Far et al., 2019b) to identify one-stage parallel-shaft helical gear faults in induction machine systems. The ladder network is accomplished through the integration of supervised and unsupervised learning strategies by modeling hierarchical latent variables. However, the unsupervised component of the ladder network may not be helpful in the semi-supervised setting if the raw data do not exhibit significant clustering on the 2D manifold, which is often not the case for bearing vibration signals. While GANs can also be implemented for semi-supervised learning, it has been reported in (Dai et al., 2017) that a good semi-supervised classifier and a good generator cannot be obtained at the same time. In other words, good semi-supervised learning actually requires a bad generator, and the well-known difficulty of training GANs further adds to the uncertainty of applying them to semi-supervised learning tasks.

The motivation for the proposed research is both broad and specific, as we seek to approach the problem of bearing anomaly detection with a solid theoretical explanation, and to leverage properties of both the labeled and unlabeled data to make the classifier more accurate than one trained on the labeled data alone. Therefore, we adopt a deep generative model as the semi-supervised learning algorithm, which is built upon solid Bayesian theory and scalable variational inference. Although some prior work employing VAEs can be found in (San Martin et al., 2018) and (Ellefsen et al., 2019), these works only utilize the discriminative features in the latent space for dimension reduction and use them to train other external classifiers, while in this work we also make use of the generative function of the VAE and train it as a classifier through an integrated approach.

2. Background of Variational Autoencoders

Variational inference techniques (Kingma and Welling, 2013) are often adopted in the process of training and prediction, as they are efficient methods to estimate posteriors of the distributions derived by neural networks. The architecture of the VAE is shown in Fig. 1, which is formulated from a probabilistic perspective that specifies a joint distribution pθ(x, z) over the observed variables x and latent variables z, such that pθ(x, z) = pθ(x|z) p(z). Latent variables are drawn from a prior density p(z) that is usually a multivariate unit Gaussian N(0, I), and these local latent variables z are related to their corresponding observations x through the likelihood pθ(x|z), which can be viewed as a probabilistic decoder (generator) that decodes z into a distribution over the observation x. The exact form of the decoder pθ(x|z) can be modeled as a neural network with parameters θ consisting of the weights W and biases b of the network.

Figure 1. Architecture of the VAE. The prior of z is regarded as part of the generative model (solid lines), so the whole generative model is denoted as pθ(x, z) = pθ(x|z) pθ(z). The approximate posterior (dashed lines) is denoted as qφ(z|x).

Having specified the decoding process, it is also desirable to perform inference, i.e., to compute the posterior pθ(z|x) of the latent variables z given the observed variables x. Additionally, we also wish to optimize the model parameters θ with respect to pθ(x) by marginalizing out the latent variables z in pθ(x, z). However, since the true posterior pθ(z|x) is analytically intractable due to the non-conjugate Gaussian prior p(z), the variational inference technique is used to find an approximate posterior qφ(z|x) with optimized variational parameters φ that minimizes its Kullback-Leibler (KL) divergence to the true posterior.

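For the common case of a diagonal-Gaussian approximate posterior qφ(z|x) = N(μ, diag(σ²)) and the unit Gaussian prior p(z) = N(0, I), the KL divergence being minimized has the standard closed form KL = -½ Σⱼ (1 + log σⱼ² - μⱼ² - σⱼ²). A minimal sketch of this identity (a textbook VAE term, not code from the paper):

```python
import numpy as np

def kl_diag_gaussian_to_std_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ): the closed-form
    divergence between a diagonal-Gaussian approximate posterior
    and the multivariate unit Gaussian prior."""
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))

# When q equals the prior (mu = 0, sigma^2 = 1), the divergence vanishes.
print(kl_diag_gaussian_to_std_normal(np.zeros(2), np.zeros(2)))  # 0.0

# Moving the posterior mean away from the prior increases the divergence.
print(kl_diag_gaussian_to_std_normal(np.array([1.0, 0.0]), np.zeros(2)))  # 0.5
```

Because this term is available in closed form, only the reconstruction part of the objective requires Monte Carlo estimation during training.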