Unsupervised Representation Learning Using Convolutional and Stacked Auto-Encoders: a Domain and Cross-Domain Feature Space Analysis

Gabriel B. Cavallari, Leonardo S. F. Ribeiro, Moacir A. Ponti
Institute of Mathematical and Computer Sciences (ICMC), University of São Paulo (USP), São Carlos, SP, Brazil
Email: [gabriel.cavallari,leonardo.sampaio.ribeiro,ponti]@usp.br

arXiv:1811.00473v1 [cs.CV] 1 Nov 2018

Abstract—A feature learning task involves training models that are capable of inferring good representations (transformations of the original space) from input data alone. When working with limited or unlabelled data, and also when multiple visual domains are considered, methods that rely on large annotated datasets, such as Convolutional Neural Networks (CNNs), cannot be employed. In this paper we investigate different auto-encoder (AE) architectures, which require no labels, and explore training strategies to learn representations from images. The models are evaluated considering both the reconstruction error of the images and the feature spaces in terms of their discriminative power. We study the role of dense and convolutional layers on the results, as well as the depth and capacity of the networks, since those are shown to affect both the dimensionality reduction and the capability of generalising across different visual domains. Classification results with AE features were as discriminative as pre-trained CNN features. Our findings can be used as guidelines for the design of unsupervised representation learning methods within and across domains.

I. INTRODUCTION

Feature learning is a sub-field of machine learning in which trainable models should be capable of inferring good representations (transformations of the original space) from input data alone. Instead of designing hand-crafted methods to be used in general-purpose scenarios, a model is trained on some dataset so that it learns parameters that are adequate for the data. Deep Learning methods were shown to be effective for the purpose of feature learning, which most recently has been termed representation learning [1].

For image data, Convolutional Neural Networks (CNNs) with multiple layers were found to be particularly adequate. After being trained for image classification tasks, those network models were shown to be good extractors of low-level features (shapes, colour blobs and edges) at the initial layers, and of high-level features (textures and semantics) at deeper layers [2]. However, deep networks are difficult to train from scratch, requiring a large number of annotated examples in order to ensure learning, due to their high shattering coefficient [3]. In the simplest form of feature extraction that requires no labels, off-the-shelf pre-trained CNN models already present a good discriminative capacity [4], [5]. CNNs can also be used in a triplet fashion in order to produce a feature embedding for multiple visual domains, provided a sufficiently large training set [6]. Given enough network capacity and enough data, those methods are capable of fitting virtually any labelled dataset, even pure noise sets, highlighting a known issue with CNNs [7]. The generality of feature spaces is then put into question, since small variations in the test set can lead to a significant decrease in testing accuracy [8]. This limitation gets worse when models with enough capacity are able to memorise a dataset, resulting in over-training [9] and poor real-life performance.

Given the aforementioned drawbacks, CNNs are not adequate in two scenarios: a) limited or unlabelled data, and b) multiple visual domains. In the first case, either the dataset is not large enough to allow learning, or the data is not labelled at all, preventing the use of any classification-based training. In the second case, a convolutional network trained on a given dataset is well fitted to represent it, while the learned feature space may not generalise well to other datasets, especially if they come from a different visual domain.

In this paper we explore stacked and convolutional auto-encoders as alternative methods for unsupervised feature learning. Auto-encoders (AEs) are methods that encode some input into a low-dimensional representation, known as the code, and then, via a decoding module, reconstruct this compact representation to match the original input as closely as possible [2]. They are useful in many scenarios, in particular for signal, image and video applications [10], [11]. The space formed by the transformation of the input into the code is often called the latent space. Encoders and decoders are often linear transformations that can be implemented using a dense layer of a neural network in an unsupervised way [12]. Stacking those layers may lead to deep stacked auto-encoders that carry some of the interesting properties of deep models [13].

Based on the idea of self-taught learning, which aims to extract relevant features from some input data with little or no annotation [14], we investigate different AE architectures and training strategies to learn representations from images, and analyse the feature spaces in terms of their discriminative capacity. Although AEs are known to allow unsupervised learning for a specific dataset, usually requiring fewer examples to converge when compared to CNNs, the ability of those methods to learn latent spaces that generalise to other domains, as well as their comparison with features extracted from pre-trained CNNs, is still to be investigated.

In this paper we explore two datasets, MNIST [15] and Fashion [16], in order to perform controlled experiments that allow understanding of the potential different AE architectures have in obtaining features that are discriminative not only on the domain of the training dataset, but also for other datasets in a cross-domain setting. Note that we assume no labels are available for training, hence the unsupervised representation learning. Features obtained with several AE architectures are compared with features extracted from a pre-trained CNN.

II. RELATED WORK AND CONTRIBUTIONS

While unsupervised representation learning is a well-studied topic in the broad field of machine learning and image understanding [14], not much work has been done towards the analysis of those feature spaces when working with cross-domain models. The problem we want to tackle by studying the feature space in a cross-domain scenario can be defined as a form of transfer learning task [17], in which one wants knowledge from a known domain to transfer to, and consequently improve learning tasks within, a different domain.

One of the few studies that leverage auto-encoders in a cross-domain adaptation problem comes from Li et al. [18]; the authors solved their low-sample-number problem by training an auto-encoder on an unrelated dataset with many samples and using the learned model to extract local features on their target domain; finally, they concatenated those features with a new set learned through a usual CNN classifier setup on the desired domain. This study showed the potential for transfer learning with AE architectures, but did not experiment with an AE-only design for their models, an application we address in this paper.

Consequently, our contribution includes the following: (i) exploring different AE architectures including dense and convolutional layers, as well as independent and tied-weights training settings; (ii) a detailed study using both the reconstruction loss function and the discriminative capability of features extracted using AEs; (iii) an analysis of the feature space in a more extreme cross-domain scenario, in which the image classes are disjoint and have dissimilar visual appearance; (iv) a comparison of the discriminative capability of features obtained from AEs and from a pre-trained CNN. To our knowledge, this study is the first that analyses feature spaces of AEs providing guidelines for future work.

III. AUTO-ENCODERS

An AE maps the input, through the encoder, into a space called the latent space. Then, the decoder is responsible for reconstructing the original input from the code. We illustrate this concept in Figure 1, where the input is an image of a hand-drawn digit.

Fig. 1. General structure of AEs: the encoder maps the input x to the code z = f(x); the decoder maps the code back to a reconstruction g(z) = x̂.

By restraining the code to be a compact representation of the input data, the AE is forced to learn an encoding transformation that contains the most valuable information about the structure of the data, so that the decoding part is able to perform well in the reconstruction task. The AE cannot, however, learn to literally copy its input, requiring restrictions to be in place.

Latent spaces of so-called undercomplete AEs have lower dimensionality than the original input, meaning that the code cannot hold a complete copy of the input data, enforcing that the model learn how to represent the same data with fewer dimensions. The loss function often used to train an AE is the squared error function:

L(x, g(f(x))) = ||x − g(f(x))||²    (1)

where L is a loss function (e.g. mean squared error), x is an input sample, f represents the encoder, g represents the decoder, z = f(x) is the code generated by the encoder, and x̂ = g(f(x)) is the reconstructed input. If the functions f(.) and g(.) are linear, they can be written in the form:

f(x) = W_e x + b_e = z
g(z) = W_d z + b_d = x̂

where W_e is a weight matrix for the encoder and b_e is a vector of bias terms for the encoder, while W_d and b_d are, respectively, the weight matrix and the bias vector for the decoder. We say the AE has tied weights when W_d = W_e^T, i.e. the AE learns a single weight matrix whose inverse transformation is assumed to be given by its transpose.

In summary, we say that an AE generalises well when it understands the data-generating distribution – i.e.
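As a minimal sketch of the linear formulation above, the following NumPy snippet trains an undercomplete AE with tied weights (W_d = W_e^T) by stochastic gradient descent on the squared error of Eq. (1). The layer sizes, learning rate and toy data are illustrative assumptions, not the configuration evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_code = 16, 4  # undercomplete: the code is smaller than the input

# Toy data lying in a 4-dimensional subspace, so a linear AE can reconstruct it.
basis = rng.normal(size=(n_code, n_in))
X = rng.normal(size=(512, n_code)) @ basis
X /= X.std()  # keep values O(1) so a fixed step size is stable

We = rng.normal(0.0, 0.1, size=(n_code, n_in))  # tied weights: Wd = We.T
be = np.zeros(n_code)
bd = np.zeros(n_in)

def encode(x):
    return We @ x + be            # z = f(x) = We x + be

def decode(z):
    return We.T @ z + bd          # x_hat = g(z) = We^T z + bd

def loss(x):
    r = x - decode(encode(x))     # Eq. (1): ||x - g(f(x))||^2
    return r @ r

def train_step(x, lr=1e-2):
    """One SGD step on the squared reconstruction error."""
    global We, be, bd
    z = encode(x)
    g = 2.0 * (decode(z) - x)     # dL/dx_hat
    gz = We @ g                   # dL/dz, backpropagated through We.T
    # Tied weights: We accumulates gradient from its decoder use (outer(z, g))
    # and from its encoder use (outer(gz, x)).
    We -= lr * (np.outer(z, g) + np.outer(gz, x))
    be -= lr * gz
    bd -= lr * g

before = np.mean([loss(x) for x in X])
for _ in range(30):               # a few epochs of per-sample SGD
    for x in X:
        train_step(x)
after = np.mean([loss(x) for x in X])
print(f"mean reconstruction error: {before:.4f} -> {after:.4f}")
```

Note that the single matrix W_e receives gradient contributions from both the encoder path and the decoder path; this is the practical consequence of tying the weights.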
