
A Classification Supervised Auto-Encoder Based on Predefined Evenly-Distributed Class Centroids

Qiuyu Zhu a, Ruixin Zhang a
a School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China

Abstract

Classic variational autoencoders (VAEs), which are built on standard function approximators, are used to learn complex data distributions. In particular, VAE has shown promise on many complex tasks. In this paper, a new autoencoder model, the classification supervised autoencoder (CSAE) based on predefined evenly-distributed class centroids (PEDCC), is proposed. Our method uses the PEDCC of the latent variables to train the network, ensuring maximization of the inter-class distance and minimization of the intra-class distance. Instead of learning the mean/variance of the latent-variable distribution and applying the reparameterization trick of VAE, the latent variables of CSAE are used directly both for classification and as input to the decoder. In addition, a new loss function is proposed that incorporates the classification loss. Based on the basic structure of the universal autoencoder, we simultaneously obtain good encoding, decoding, and classification results, as well as good model generalization performance. These theoretical advantages are reflected in the experimental results.

1. Introduction

Over the past few years, variational autoencoders have demonstrated their effectiveness in many areas, such as unsupervised learning [1], supervised learning [2], and semi-supervised learning [3]. The theory of the variational autoencoder comes from the perspective of Bayes' theorem: the posterior distribution of the latent variables z conditioned on the data x is approximated by a normal distribution whose mean and variance are the output of a neural network. To make the generated sample x* very similar to the data x, VAE adds a Kullback-Leibler divergence term to the loss function, where the mapping of the data x to the latent variables z corresponds to the encoder, and the generation of the sample x* from the distribution of the latent variables z corresponds to the decoder. The complex mapping function is learned by a neural network, since neural networks can theoretically approximate any function.

Taking classification as an example, we want to use high-dimensional latent variables to represent the input data. In the high-dimensional space, the ultimate objective of classification is to make the inter-class distance as large as possible, i.e., the samples are more separable, and the intra-class distance as small as possible; in other words, the distribution of each class is more aggregated. For the distribution of the original data, in the case of MNIST [11], the distribution of the digit "9" is close to those of the digits "4" and "7" [2, 22], which is why CVAE sometimes mixes them up.

A lot of work has been done to prove the effectiveness of the codec (encoder-decoder) structure. Based on the basic structure of the autoencoder and predefined evenly-distributed class centroids (PEDCC), a new supervised learning method for autoencoders is proposed in this paper. Using PEDCC, which meets the criterion of maximized inter-class distance (the distance between classes is as large as possible), we map the label of the input data x to a distinct predefined class centroid and let the encoder network learn the mapping function. Through joint training, the network mapping drives the latent features of same-class samples as close as possible to the predefined class centroids, which finally yields good classification.
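To make this centroid-matching idea concrete, the following is a minimal sketch, assuming a PyTorch encoder that maps images to latent vectors and a precomputed centroid matrix; the names encoder and pedcc_centroids are hypothetical placeholders, not the actual implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical setup: pedcc_centroids is a (num_classes, latent_dim) tensor of
# predefined evenly-distributed class centroids, and encoder maps a batch of
# images to latent vectors of size latent_dim.
def centroid_matching_loss(encoder, x, labels, pedcc_centroids):
    z = encoder(x)                           # latent features, (batch, latent_dim)
    targets = pedcc_centroids[labels]        # predefined centroid of each sample's class
    return F.mse_loss(z, targets)            # pull latent features toward their class centroid

# At test time, a nearest-class-mean (NCM) classifier assigns each sample to the
# closest predefined centroid.
def ncm_predict(encoder, x, pedcc_centroids):
    z = encoder(x)
    dists = torch.cdist(z, pedcc_centroids)  # (batch, num_classes) Euclidean distances
    return dists.argmin(dim=1)
```

Because the target of each latent feature is fixed in advance, no extra classification network is needed: the same latent vector serves as both the classifier input and the decoder input.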
As far as we know, prior to this article, there was no method that uses predefined class centroids to train autoencoders. We use the output of the encoder directly as the input of the decoder and, to make the autoencoder input and output as close as possible, adopt the mean square error (MSE) loss function. Because of resampling, the images generated by VAE generally suffer from edge blurring. To solve this problem, we add a wavelets loss: a wavelet transform is applied to the input image and to the output of the autoencoder respectively, and their difference is taken as a new loss term to improve the edge quality of the generated image. This places an additional constraint on the edge difference between the input and the generated image, which is more conducive to improving the subjective quality of the image. To further improve the subjective quality of the generated image, we draw on the idea of reparameterization in VAE and add Gaussian noise to the latent features to construct the input of the decoder during the training phase. The experimental results prove that this trick is very effective.
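The wavelet loss and the latent-noise trick just described can be sketched as follows, assuming PyTorch; the single-level Haar transform, the noise_std value, and the decoder name are illustrative assumptions, not the exact configuration used in the paper.

```python
import torch
import torch.nn.functional as F

def haar_dwt2(x):
    """Single-level 2D Haar transform of a (batch, channels, H, W) image tensor,
    returning the low-frequency band and the three high-frequency detail bands."""
    a = x[:, :, 0::2, 0::2]   # top-left pixel of each 2x2 block
    b = x[:, :, 0::2, 1::2]   # top-right
    c = x[:, :, 1::2, 0::2]   # bottom-left
    d = x[:, :, 1::2, 1::2]   # bottom-right
    ll = (a + b + c + d) / 2  # approximation (low-frequency) band
    lh = (a - b + c - d) / 2  # detail band
    hl = (a + b - c - d) / 2  # detail band
    hh = (a - b - c + d) / 2  # detail band
    return ll, lh, hl, hh

def wavelets_loss(x, x_rec):
    """MSE between the wavelet sub-bands of the input and of the reconstruction,
    constraining both low-frequency content and edges."""
    return sum(F.mse_loss(wr, wx) for wx, wr in zip(haar_dwt2(x), haar_dwt2(x_rec)))

def decode_with_noise(decoder, z, noise_std=0.1, training=True):
    """During training, Gaussian noise is added to the latent features before decoding."""
    if training:
        z = z + noise_std * torch.randn_like(z)
    return decoder(z)
```

In training, this term would be combined with the MSE reconstruction loss and the centroid-matching loss; only the two additions discussed above are shown here.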
Our main contributions are as follows:
1) PEDCC is proposed to meet the criterion of maximized inter-class distance, so that convolutional neural networks can focus on learning more compact intra-class distributions.
2) With PEDCC, our method combines classification and the autoencoder: the latent variables can be used both for classification and for reconstruction, and noise is added during training to improve classification accuracy and image quality simultaneously.
3) To further improve image quality, a wavelets loss function is proposed. By drawing on traditional pattern recognition methods, a constraint is placed on both the high-frequency and low-frequency information of the image, which also benefits classification and reconstruction performance.

Below, we first introduce some of the previous work in Section 2. In Section 3 our approach is described in detail. Then, in Section 4, we verify the validity of our method through experimental results on different datasets. Finally, in Section 5, we discuss some issues that still exist and what will be done in the future.

2. Related work

The autoencoder is an unsupervised learning algorithm that is mainly used for dimension reduction or feature extraction. It can also be used in deep learning to initialize the weights before the training phase. Depending on the application scenario, autoencoders can be divided into the sparse autoencoder [13,14], which adds L1 regularization to the basic autoencoder to make the features sparse; the denoising autoencoder [15,16], which is designed to prevent overfitting and adds noise to the input data to enhance the generalization ability of the model; and the variational autoencoder [1,18], which learns the distribution of the raw data by setting the distribution of the latent variables to N(0, I) and can in turn produce data similar to the original data.

Normally, the variational autoencoder is used in unsupervised learning, so we cannot control what the decoder generates. The conditional variational autoencoder (CVAE) [19,20] therefore combines the variational autoencoder with supervised information, which allows us to control the generation of the decoder. Ref. [2] assumes that class labels are independent of the latent features, so that the two are concatenated directly to generate data. Ref. [24] controls the generation of the latent variables through the labels of face attributes, and then generates data by sampling directly from the distribution of the latent variables. Ref. [19] considers the prediction problem directly, with the label y as the data to be generated and the data x as the label, to achieve the purpose of predicting the label y of data x. Refs. [28,29] train the generator directly, where the latent variables z are also trained as network parameters; therefore, they can only generate images, without the functions of feature extraction and classification.

The above works only learn the data distribution and perform the codec task, without a classification function. In recent years, some scholars have used VAE in the field of incremental learning, which mainly focuses on how to alleviate catastrophic forgetting [25]. Ref. [21] adds a classification layer to the VAE structure and uses a dual network structure, a "teacher-student" model, to mitigate the problem of catastrophic forgetting. To achieve the classification function, Ref. [22] adds an additional classification network to CVAE for joint training. Usually, to add a classification function to an autoencoder, an additional network structure is necessary. Through predefined evenly-distributed class centroids, the CSAE proposed in this paper maps training labels to these class centers and can use the latent variables to classify directly. In view of this new framework, a new loss function is also proposed for training.

3. Method

Fig. 1. The structure of CSAE: an encoder built from convolution blocks and a linear layer, an NCM classifier based on PEDCC (Loss 1), a decoder built from deconvolution blocks, and wavelet transforms of the input and the reconstruction (Loss 2).

In this section we introduce the details of CSAE and show how they are combined to form an end-to-end learning system. Section 3.1 gives a description of PEDCC. Section 3.2 describes the loss function, and Section 3.3 discusses the network structure.

3.1 Predefined Evenly-Distributed Class Centroids

From the traditional view of statistical pattern recognition, the main objective of dimension reduction is to generate low-dimensional representations with maximum inter-class distance and minimum intra-class distance, as in the LDA algorithm. For a deep learning classification model, the final Softmax is a linear classifier, while the preceding multilayer network is a dimension-reducing mapping that generates low-dimensional latent variables. If the latent variables of the samples have small intra-class and large inter-class distances, the neural network will learn better features. Refs. [4] and [5] modify the Softmax function of the cross-entropy loss in convolutional neural networks (CNNs) to improve classification performance. Because of their strong learning ability, it is not difficult for neural networks to obtain good aggregation within the same class. However, maximizing the inter-class distance is a difficult problem, and it varies across classification tasks. If the variance within a class is large and the inter-class distance is small, different classes will overlap, which leads to wrong classifications, and there is no good way to avoid this problem. In this work, the class centers of the latent variables are set artificially by the PEDCC method to ensure that the distances between these cluster centers are as large as possible.
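The paper has not yet specified at this point how such evenly-distributed centroids are computed. As an illustration only, one possible construction is to spread unit vectors over the hypersphere by minimizing their pairwise cosine similarity with gradient descent; the function name generate_pedcc and all hyperparameters below are assumptions, not the authors' algorithm.

```python
import torch
import torch.nn.functional as F

def generate_pedcc(num_classes, latent_dim, steps=5000, lr=0.1, tau=10.0):
    """Spread num_classes points over the unit hypersphere in latent_dim dimensions
    by minimizing a smooth maximum of the pairwise cosine similarities, i.e. by
    repeatedly pushing the closest pair of centroids apart."""
    points = torch.randn(num_classes, latent_dim, requires_grad=True)
    optimizer = torch.optim.SGD([points], lr=lr)
    off_diag = ~torch.eye(num_classes, dtype=torch.bool)     # mask out self-similarities
    for _ in range(steps):
        optimizer.zero_grad()
        c = F.normalize(points, dim=1)                        # keep centroids on the unit sphere
        sim = c @ c.t()                                       # pairwise cosine similarities
        loss = torch.logsumexp(tau * sim[off_diag], dim=0)    # smooth surrogate for the largest similarity
        loss.backward()
        optimizer.step()
    return F.normalize(points.detach(), dim=1)

# Example: 10 centroids in a 64-dimensional latent space. When num_classes <= latent_dim + 1,
# the ideal configuration is a regular simplex with pairwise cosine similarity -1/(num_classes - 1).
pedcc_centroids = generate_pedcc(num_classes=10, latent_dim=64)
```

Any construction that fixes the centroids in advance with maximal mutual separation serves the same purpose: the network then only has to learn compact intra-class distributions around these fixed targets.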