On the Dynamics of Gradient Descent for Autoencoders

Thanh V. Nguyen∗, Raymond K. W. Wong†, Chinmay Hegde∗
∗Iowa State University, †Texas A&M University

Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS) 2019, Naha, Okinawa, Japan. PMLR: Volume 89. Copyright 2019 by the author(s).

Abstract

We provide a series of results for unsupervised learning with autoencoders. Specifically, we study shallow two-layer autoencoder architectures with shared weights. We focus on three generative models for data that are common in statistical machine learning: (i) the mixture-of-Gaussians model, (ii) the sparse coding model, and (iii) the sparsity model with non-negative coefficients. For each of these models, we prove that under suitable choices of hyperparameters, architectures, and initialization, autoencoders learned by gradient descent can successfully recover the parameters of the corresponding model. To our knowledge, this is the first result that rigorously studies the dynamics of gradient descent for weight-sharing autoencoders. Our analysis can be viewed as theoretical evidence that shallow autoencoder modules indeed can be used as feature learning mechanisms for a variety of data models, and may shed insight on how to train larger stacked architectures with autoencoders as basic building blocks.

1 Introduction

1.1 Motivation

Due to the resurgence of neural networks and deep learning, there has been growing interest in the community towards a thorough and principled understanding of training neural networks, in both theoretical and algorithmic aspects. This has led to several important breakthroughs recently, including provable algorithms for learning shallow (1-hidden-layer) networks with nonlinear activations [1, 2, 3, 4], deep networks with linear activations [5], and residual networks [6, 7].

A typical approach adopted by this line of work is as follows: assume that the data obeys a ground-truth generative model (induced by simple but reasonably expressive data-generating distributions), and prove that the weights learned by the proposed algorithms (either exactly or approximately) recover the parameters of the generative model. Indeed, such distributional assumptions are necessary to overcome known NP-hardness barriers for learning neural networks [8]. Nevertheless, the majority of these approaches have focused on neural network architectures for supervised learning, barring a few exceptions which we detail below.

1.2 Our contributions

In this paper, we complement this line of work by providing new theoretical results for unsupervised learning using neural networks. Our focus here is on shallow two-layer autoencoder architectures with shared weights. Conceptually, we build upon previous theoretical results on learning autoencoder networks [9, 10, 11], and we elaborate on the novelty of our work in the discussion on prior work below.

Our setting is standard: we assume that the training data consists of i.i.d. samples from a high-dimensional distribution parameterized by a generative model, and we train the weights of the autoencoder using ordinary (batch) gradient descent. We consider three families of generative models that are commonly adopted in machine learning: (i) the Gaussian mixture model with well-separated centers [12]; (ii) the k-sparse model, specified by sparse linear combinations of atoms [13]; and (iii) the non-negative k-sparse model [11]. While these models are traditionally studied separately depending on the application, all of these model families can be expressed via a unified, generic form:

    y = A x* + η,    (1)

which we (loosely) dub the generative bilinear model. In this form, A is a ground-truth n × m matrix, x* is an m-dimensional latent code vector, and η is an independent n-dimensional random noise vector; the samples y are what we observe. Different choices of n and m, as well as different assumptions on A and x*, lead to the three aforementioned generative models.
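As a concrete illustration of (1), the following sketch (added here for illustration and not taken from the paper; the helper name sample_code, the particular coefficient distributions, and the noise level are our own simplifying assumptions) shows how the three model families differ only in how the latent code x* is drawn:

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_code(model, m, k=3):
        """Draw a latent code x* under one of the three generative models
        (illustrative conventions; the paper's exact assumptions may differ)."""
        x = np.zeros(m)
        if model == "mixture_of_gaussians":
            # 1-sparse indicator: y is then a noisy copy of one column (center) of A.
            x[rng.integers(m)] = 1.0
        elif model == "k_sparse":
            S = rng.choice(m, size=k, replace=False)
            x[S] = rng.choice([-1.0, 1.0], size=k)     # signed sparse coefficients
        elif model == "nonneg_k_sparse":
            S = rng.choice(m, size=k, replace=False)
            x[S] = rng.uniform(0.5, 1.0, size=k)       # non-negative sparse coefficients
        return x

    n, m = 64, 128
    A = rng.standard_normal((n, m))
    A /= np.linalg.norm(A, axis=0)                     # ground-truth columns, unit norm

    x_star = sample_code("k_sparse", m)
    eta_noise = 0.01 * rng.standard_normal(n)          # independent noise η
    y = A @ x_star + eta_noise                         # observation, as in (1)

In every case the observation is the same bilinear expression y = A x* + η; what changes across the three models is the distribution of x*, together with the accompanying assumptions on A, n, and m.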
Under these three generative models, and with suitable choices of hyper-parameters, initial estimates, and autoencoder architectures, we rigorously prove that:

    Two-layer autoencoders, trained with (normalized) gradient descent over the reconstruction loss, provably learn the parameters of the underlying generative bilinear model.

To the best of our knowledge, our work is the first to analytically characterize the dynamics of gradient descent for training two-layer autoencoders. Our analysis can be viewed as theoretical evidence that shallow autoencoders can be used as feature learning mechanisms (provided the generative modeling assumptions hold), a view that seems to be widely adopted in practice. Our analysis highlights the following interesting conclusions: (i) the activation function of the hidden (encoder) layer influences the choice of bias; (ii) the bias of each hidden neuron in the encoder plays an important role in achieving convergence of the gradient descent; and (iii) the gradient dynamics depends on the complexity of the generative model. Further, we speculate that our analysis may shed insight on practical considerations for training deeper networks with stacked autoencoder layers as building blocks [9].

1.3 Techniques

Our analysis is built upon recent algorithmic developments in the sparse coding literature [14, 15, 16]. Sparse coding corresponds to the setting where the synthesis coefficient vector x*(i) in (1) for each data sample y(i) is assumed to be k-sparse, i.e., x*(i) has at most k ≪ m non-zero elements. The exact algorithms proposed in these papers are all quite different, but at a high level, all of these methods involve establishing a notion that we dub "support consistency". Broadly speaking, for a given data sample y(i) = A x*(i) + η(i), the idea is that when the parameter estimates are close to the ground truth, it is possible to accurately estimate the true support of the synthesis vector x*(i) for each data sample y(i).

We extend this to a broader family of generative models to form a notion that we call "code consistency". We prove that if initialized appropriately, the weights of the hidden (encoder) layer of the autoencoder provide useful information about the sign pattern of the corresponding synthesis vectors for every data sample. Somewhat surprisingly, the choice of activation function of each neuron in the hidden layer plays an important role in establishing code consistency and affects the possible choices of bias.

The code consistency property is crucial for establishing the correctness of gradient descent over the reconstruction loss. This turns out to be rather tedious due to the weight sharing (a complication which requires a substantial departure from the existing machinery for the analysis of sparse coding algorithms), and indeed forms the bulk of the technical difficulty in our proofs. Nevertheless, we are able to derive explicit linear convergence rates for all of the generative models listed above. We do not attempt to analyze other training schemes (such as stochastic gradient descent or dropout) but anticipate that our analysis may lead to further work along those directions.
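To illustrate what code consistency asserts, here is a toy sketch (our own, not from the paper; the dimensions, the 0.05 perturbation level, and the bias value are arbitrary assumptions): a ReLU encoder whose shared weights are a small column-wise perturbation of the ground truth recovers the support of x* for a sample from the non-negative k-sparse model.

    import numpy as np

    rng = np.random.default_rng(1)

    def encode(W, b, y):
        """Hidden (encoder) layer: h = ReLU(W^T y + b), with weights W shared with the decoder."""
        return np.maximum(W.T @ y + b, 0.0)

    def decode(W, h):
        """Output (decoder) layer reuses the same weights: y_hat = W h."""
        return W @ h

    # A sample from the non-negative k-sparse model.
    n, m, k = 512, 1024, 3
    A = rng.standard_normal((n, m))
    A /= np.linalg.norm(A, axis=0)                     # unit-norm ground-truth columns
    x_star = np.zeros(m)
    S = np.sort(rng.choice(m, size=k, replace=False))
    x_star[S] = 1.0
    y = A @ x_star + 0.01 * rng.standard_normal(n)

    # Encoder weights: a small column-wise perturbation of A, with a negative
    # bias acting as a threshold.  The nonzero pattern of h should match supp(x*).
    Delta = rng.standard_normal((n, m))
    Delta /= np.linalg.norm(Delta, axis=0)
    W = A + 0.05 * Delta
    h = encode(W, b=-0.5 * np.ones(m), y=y)
    print(np.array_equal(np.flatnonzero(h), S))        # True for typical draws

Which activation and bias make this kind of recovery possible for each generative model is exactly what the code-consistency results in the paper pin down.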
1.4 Comparison with prior work

Recent advances in algorithmic learning theory have led to numerous provably efficient algorithms for learning Gaussian mixture models, sparse codes, topic models, and ICA (see [12, 13, 14, 15, 16, 17, 18, 19] and references therein). We omit a complete treatment of prior work due to space constraints.

We would like to emphasize that we do not propose a new algorithm or autoencoder architecture, nor are we the first to highlight the applicability of autoencoders to the aforementioned generative models. Indeed, generative models such as k-sparsity models have served as the motivation for the development of deep stacked (denoising) autoencoders dating back to the work of [20]. The paper [9] proves that stacked weight-sharing autoencoders can recover the parameters of sparsity-based generative models, but their analysis succeeds only for certain generative models whose parameters are themselves randomly sampled from certain distributions. In contrast, our analysis holds for a broader class of networks; we make no randomness assumptions on the parameters of the generative models themselves.

More recently, autoencoders have been shown to learn sparse representations [21]. The recent paper [11] demonstrates that under the sparse generative model, the standard squared-error reconstruction loss of ReLU autoencoders exhibits (with asymptotically many samples) critical points in a neighborhood of the ground-truth dictionary. However, they do not analyze gradient dynamics, nor do they establish convergence rates. We complete this line of work by proving explicitly that gradient descent (with column-wise normalization) in the asymptotic limit exhibits linear convergence up to a radius around the ground-truth parameters.

2 Preliminaries

Notation. Denote by x_S the sub-vector of x ∈ R^m indexed by the elements of S ⊆ [m]. Similarly, let W_S be the sub-matrix of W ∈ R^{n×m} with columns indexed by the elements of S. Also, define supp(x) ≜ {i ∈ [m] : x_i ≠ 0} as the support of x, sgn(x) as the element-wise sign of x, and 1_E as the indicator of an event E.

We adopt standard asymptotic notation: f(n) = O(g(n)) (respectively, f(n) = Ω(g(n))) if there exists some constant C > 0 such that |f(n)| ≤ C|g(n)| (respectively, |f(n)| ≥ C|g(n)|); f(n) = Θ(g(n)) if both f(n) = O(g(n)) and f(n) = Ω(g(n)); and f(n) = ω(g(n)) if lim_{n→∞} |f(n)/g(n)| = ∞. In addition, g(n) = O*(f(n)) indicates |g(n)| ≤ K|f(n)| for some small enough constant K.

Figure 1: Architecture of a shallow 2-layer autoencoder, mapping an input layer (y_1, ..., y_n) through a hidden layer to an output layer (ŷ_1, ..., ŷ_n).
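To complement Figure 1, the sketch below shows one training step of the kind analyzed in this paper: a batch gradient step on the squared reconstruction loss of the weight-sharing autoencoder, followed by column-wise normalization. This is our own schematic under simplifying assumptions (ReLU encoder, fixed bias, plain squared-error loss, hypothetical step size); the paper's precise update rule, bias schedule, and step-size conditions are what its main theorems characterize.

    import numpy as np

    def gd_step(W, b, Y, eta=0.1):
        """One batch gradient step on L(W) = (1/2N) * sum_i ||W h_i - y_i||^2,
        where h_i = ReLU(W^T y_i + b), followed by column-wise normalization.
        Y holds one sample per column (n x N)."""
        n, N = Y.shape
        Z = W.T @ Y + b[:, None]                 # encoder pre-activations (m x N)
        H = np.maximum(Z, 0.0)                   # codes
        R = W @ H - Y                            # reconstruction residuals
        # Because the weights are shared, the gradient has two terms:
        # one through the decoder (R H^T) and one back through the encoder.
        grad = (R @ H.T + Y @ ((W.T @ R) * (Z > 0)).T) / N
        W = W - eta * grad
        return W / np.linalg.norm(W, axis=0)     # normalize each column

Iterating gd_step from a suitable initialization is the regime in which the paper establishes linear convergence, up to a radius around the ground-truth parameters.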
