Augmenting Supervised Neural Networks with Unsupervised Objectives for Large-Scale Image Classification


Yuting Zhang [email protected]
Kibok Lee [email protected]
Honglak Lee [email protected]
Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, USA

arXiv:1606.06582v1 [cs.LG] 21 Jun 2016

Abstract

Unsupervised learning and supervised learning are key research topics in deep learning. However, as high-capacity supervised neural networks trained with a large amount of labels have achieved remarkable success in many computer vision tasks, the availability of large-scale labeled images reduced the significance of unsupervised learning. Inspired by the recent trend toward revisiting the importance of unsupervised learning, we investigate joint supervised and unsupervised learning in a large-scale setting by augmenting existing neural networks with decoding pathways for reconstruction. First, we demonstrate that the intermediate activations of pretrained large-scale classification networks preserve almost all the information of input images except a portion of local spatial details. Then, by end-to-end training of the entire augmented architecture with the reconstructive objective, we show improvement of the network performance for supervised tasks. We evaluate several variants of autoencoders, including the recently proposed "what-where" autoencoder that uses the encoder pooling switches, to study the importance of the architecture design. Taking the 16-layer VGGNet trained under the ImageNet ILSVRC 2012 protocol as a strong baseline for image classification, our methods improve the validation-set accuracy by a noticeable margin.

1. Introduction

Unsupervised and supervised learning have been two associated key topics in deep learning. One important application of deep unsupervised learning over the past decade was to pretrain a deep neural network, which was then finetuned for supervised tasks (such as classification). Many deep unsupervised models were proposed, such as stacked (denoising) autoencoders (Bengio et al., 2007; Vincent et al., 2010), deep belief networks (Hinton et al., 2006; Lee et al., 2009), sparse encoder-decoders (Ranzato et al., 2007; Kavukcuoglu et al., 2010), and deep Boltzmann machines (Salakhutdinov & Hinton, 2009). These approaches significantly improved the performance of neural networks on supervised tasks when the amount of available labels was not large.

However, over the past few years, supervised learning without any unsupervised pretraining has achieved even better performance, and it has become the dominant approach for training deep neural networks on real-world tasks, such as image classification (Krizhevsky et al., 2012) and object detection (Girshick et al., 2016). Purely supervised learning allows more flexibility in network architectures, e.g., the inception unit (Szegedy et al., 2015) and the residual structure (He et al., 2016), which are not limited by the modeling assumptions of unsupervised methods. Furthermore, the recently developed batch normalization (BN) method (Ioffe & Szegedy, 2015) has made neural network training even easier. As a result, the once popular framework of unsupervised pretraining has become less significant and even overshadowed (LeCun et al., 2015) in the field.

Several attempts (e.g., Ranzato & Szummer (2008); Larochelle & Bengio (2008); Sohn et al. (2013); Goodfellow et al. (2013)) have been made to couple unsupervised and supervised learning in the same phase, making unsupervised objectives able to impact the network training after supervised learning takes place. These methods unleashed new potential for unsupervised learning, but they have not yet been shown to scale to large amounts of labeled and unlabeled data. Rasmus et al. (2015) recently proposed an architecture that is easy to couple with a classification network by extending the stacked denoising autoencoder with lateral connections, i.e., from the encoder to the same stages of the decoder, and their method showed promising semi-supervised learning results. Nonetheless, the existing validations (Rasmus et al., 2015; Pezeshki et al., 2016) were mostly on small-scale datasets like MNIST. Recently, Zhao et al. (2015) proposed the stacked "what-where" autoencoder (SWWAE) by extending the stacked convolutional autoencoder with Zeiler et al. (2011)'s "unpooling" operator, which recovers the locational details (lost due to max-pooling) using the pooling switches from the encoder. While achieving promising results on the CIFAR dataset with extended unlabeled data (Torralba et al., 2008), SWWAE has not been demonstrated effective for larger-scale supervised tasks.
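To make the unpooling-with-switches operator concrete, the following is a minimal sketch (not the authors' code) using PyTorch's built-in max-pooling with `return_indices` and `MaxUnpool2d`, which implement the same mechanism; the tensor shapes are arbitrary placeholders.

```python
import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)

x = torch.randn(1, 64, 56, 56)       # an intermediate feature map (sizes arbitrary)
pooled, switches = pool(x)           # "switches": argmax location in each 2x2 block
restored = unpool(pooled, switches)  # each value returns to its recorded position

# A fixed-switch variant instead writes every value to the same corner of its
# block, discarding the locational detail that the known switches preserve:
fixed = torch.zeros_like(x)
fixed[:, :, ::2, ::2] = pooled       # e.g., always the top-left corner
assert restored.shape == fixed.shape == x.shape
```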
In this paper, inspired by the recent trend toward simultaneous supervised and unsupervised neural network learning, we augment challenge-winning neural networks with decoding pathways for reconstruction, demonstrating the feasibility of improving high-capacity networks for large-scale image classification. Specifically, we take a segment of the classification network as the encoder and use the mirrored architecture as the decoding pathway to build several autoencoder variants. The autoencoder framework is easy to construct by augmenting an existing network without involving complicated components, and the decoding pathways can be trained either separately from or together with the encoding/classification pathway by standard stochastic gradient descent, without special tricks such as noise injection or activation normalization (see the sketch after the contribution list below).

This paper first investigates the reconstruction properties of large-scale deep neural networks. Inspired by Dosovitskiy & Brox (2016), we use the auxiliary decoding pathway of the stacked autoencoder to reconstruct images from intermediate activations of the pretrained classification network. Using SWWAE, we demonstrate better image reconstruction quality than an autoencoder whose unpooling operators use fixed switches, i.e., upsample each activation to a fixed location within the kernel. This result suggests that the intermediate (even high-level) feature representations preserve nearly all the information of the input images, except for the locational details "neutralized" by max-pooling layers.

Based on the above observations, we further improve the quality of reconstruction, an indication of the mutual information between the input and the feature representations (Vincent et al., 2010), by finetuning the entire augmented architecture with both supervised and unsupervised objectives. In this setting, the image reconstruction loss can also impact the classification pathway. Contrary to conventional beliefs in the field, we demonstrate that the unsupervised learning objective posed by the auxiliary autoencoder is an effective way to help the classification network obtain better locally optimal solutions for supervised tasks. To the best of our knowledge, this work is the first to show that an unsupervised objective can improve image classification at this scale. Our main contributions are summarized as follows:

• We show that the feature representations learned by high-capacity neural networks preserve the input information extremely well, despite the spatial invariance induced by pooling. Our models can perform high-quality image reconstruction (i.e., "inversion") from intermediate activations with the unpooling operator, using the known switches from the encoder.

• We successfully improve the large-scale image classification performance of a state-of-the-art classification network by finetuning the augmented network with a reconstructive decoding pathway, making its intermediate activations preserve the input information better.

• We study several variants of the resultant autoencoder architecture, including instances of SWWAE and more basic versions of autoencoders, and provide insight into the importance of the pooling switches and the layer-wise reconstruction loss.
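As a sketch of the joint training described above, the toy model below augments a one-block "encoder" with a mirrored decoder that reuses the pooling switches, and sums a classification loss with an input reconstruction loss. The architecture, the loss weight `lam`, and all sizes are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AugmentedNet(nn.Module):
    """A toy classification network augmented with a mirrored decoding pathway."""
    def __init__(self, num_classes: int = 1000):
        super().__init__()
        # Encoder: stands in for a segment of the classification network.
        self.enc = nn.Conv2d(3, 64, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
        self.classifier = nn.Linear(64, num_classes)
        # Decoder: the mirrored architecture, reusing the pooling switches.
        self.unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)
        self.dec = nn.ConvTranspose2d(64, 3, kernel_size=3, padding=1)

    def forward(self, x):
        h, switches = self.pool(F.relu(self.enc(x)))
        logits = self.classifier(h.mean(dim=(2, 3)))  # global average pooling
        recon = self.dec(self.unpool(h, switches))    # reconstructive pathway
        return logits, recon

net = AugmentedNet()
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 1000, (8,))
logits, recon = net(x)
lam = 1.0  # placeholder weight on the reconstruction term
loss = F.cross_entropy(logits, y) + lam * F.mse_loss(recon, x)
loss.backward()  # plain SGD backprop; no noise injection or normalization tricks
```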
2. Related work

In terms of using image reconstruction to improve classification, our work is related to supervised sparse coding and dictionary learning, which are known to extract sparse local features from image patches via sparsity-constrained reconstruction loss functions; the extracted sparse features are then used for classification. Mairal et al. (2009) proposed to combine the reconstruction loss of sparse coding and the classification loss of sparse features in a unified objective function. Yang et al. (2010) extended this supervised sparse coding with max-pooling to obtain translation-invariant local features.

Zeiler et al. (2010) proposed deconvolutional networks for unsupervised feature learning, which consist of multiple layers of convolutional sparse coding with max-pooling, where each layer is trained to reconstruct the output of the previous layer. Zeiler et al. (2011) further introduced the "unpooling with switches" layer to deconvolutional networks to enable end-to-end training.

As an alternative to sparse coding and discriminative convolutional networks, autoencoders (Bengio, 2009) are another class of models for representation learning, in particular for non-linear principal component analysis (Dong & McAvoy, 1996; Scholz & Vigário, 2002), by minimizing the reconstruction errors of a bottlenecked neural network. The stacked autoencoder (SAE) (Bengio et al., 2007) is amenable to hierarchical representation learning. With pooling-induced
