A Unified Energy-Based Framework for Unsupervised Learning

Marc'Aurelio Ranzato    Y-Lan Boureau    Sumit Chopra    Yann LeCun
Courant Institute of Mathematical Sciences
New York University, New York, NY 10003

Abstract

We introduce a view of unsupervised learning that integrates probabilistic and non-probabilistic methods for clustering, dimensionality reduction, and feature extraction in a unified framework. In this framework, an energy function associates low energies to input points that are similar to training samples, and high energies to unobserved points. Learning consists in minimizing the energies of training samples while ensuring that the energies of unobserved ones are higher. Some traditional methods construct the architecture so that only a small number of points can have low energy, while other methods explicitly "pull up" on the energies of unobserved points. In probabilistic methods, the energy of unobserved points is pulled up by minimizing the log partition function, an expensive and sometimes intractable process. We explore different and more efficient methods using an energy-based approach. In particular, we show that a simple solution is to restrict the amount of information contained in the codes that represent the data. We demonstrate such a method by training it on natural image patches and by applying it to image denoising.

1 Introduction

The main goal of unsupervised learning is to capture regularities in data for the purpose of extracting useful representations or for restoring corrupted data. For example, probabilistic density models can be used for deriving efficient codes (for compression), or for restoring corrupted data (e.g. denoising images). Many unsupervised methods explicitly produce representations, codes, or feature vectors from which the data is to be reconstructed. For example, in clustering methods such as K-Means, the code is the index of the prototype in the codebook that is closest to the data vector. Similarly, in Principal Component Analysis, the code is the projection of the data vector on a linear subspace. In auto-encoder neural nets, the code is the state of a hidden layer with a small number of units, from which a vector on a low-dimensional non-linear manifold is reconstructed. In Restricted Boltzmann Machines [4], the code is a vector of stochastic binary variables from which the input can be reconstructed. Finally, in sparse-overcomplete representation methods such as Non-negative Matrix Factorization [2], Olshausen and Field's sparse overcomplete coding method [7], or the Energy-Based Model for learning sparse over-complete features [11], the code is a high-dimensional vector with most of the components at zero. In general, these models are trained in such a way that training data can be accurately reconstructed from the codes, while unobserved data vectors cannot. By contrast, most density estimation methods (e.g. Gaussian Mixture Models) or the Product of Experts methods [3] do not produce explicit codes from which the data can be reconstructed, but merely compute a function (the negative log probability) that produces low values around training data, and high values everywhere else.
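To make the notion of a code concrete, the sketch below (our illustration, not from the paper; plain NumPy, with helper names chosen for exposition) computes the codes described above for two of the listed methods: the nearest-prototype index used by K-Means and the linear projection used by PCA, together with the reconstruction error that such models keep small on training data.

```python
import numpy as np

def kmeans_code(y, codebook):
    """K-Means code: the index of the prototype closest to the data vector y."""
    # codebook has shape (k, d); y has shape (d,)
    return int(np.argmin(np.linalg.norm(codebook - y, axis=1)))

def pca_code(y, components, mean):
    """PCA code: the projection of the (centered) data vector on a linear subspace."""
    # components has shape (m, d) with m < d; its rows are orthonormal directions
    return components @ (y - mean)

def pca_decode(z, components, mean):
    """Map a low-dimensional code back to the input space."""
    return components.T @ z + mean

# Toy example: use the reconstruction error of the PCA code as an energy-like value.
rng = np.random.default_rng(0)
y = rng.normal(size=16)
components = np.linalg.qr(rng.normal(size=(16, 4)))[0].T  # 4 orthonormal rows
mean = np.zeros(16)
z = pca_code(y, components, mean)
energy = np.sum((y - pca_decode(z, components, mean)) ** 2)
```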
Unsupervised methods appear very diverse, and based on very different principles. One of the main objectives of this paper is to show that most unsupervised learning methods are based on a common underlying principle. At a high level, many unsupervised models can be viewed as a scalar-valued energy function E(Y) that operates on input data vectors Y. The function E(Y) is designed to produce low energy values when Y is similar to some training data vectors, and high energy values when Y is dissimilar to any training data vector. The energy function is subject to learning. Training an unsupervised model consists in searching for an energy function within a family {E(Y, W), W ∈ W} indexed by a parameter W, that gives low energy values on input points that are similar to the training samples, and large values on dissimilar points.

We argue that the various unsupervised methods merely differ on three points: how E(Y, W) is parameterized, how the energy of observed points is made small, and how the energy of unobserved points is made large. Common probabilistic and non-probabilistic methods use particular combinations of model architecture and loss functions to achieve this. This paper discusses which combinations of architectures and loss functions are allowed, which combinations are efficient, and which combinations do not work. One problem is that pulling up on the energies of unobserved points in high-dimensional spaces is often very difficult and even intractable. In particular, we show that probabilistic models use a particular method for pulling up the energy of unobserved points that turns out to be very inefficient in many cases. We propose new loss functions for pulling up energies that have efficiency advantages over probabilistic approaches. We show that unsupervised methods that reconstruct the data vectors from internal codes can alleviate the need for explicitly pulling up the energy of unobserved points by limiting the information content of the code. This principle is illustrated by a new model that can learn low-entropy, sparse and overcomplete codes. The model is trained on natural image patches and applied to image denoising at very high noise levels with state-of-the-art performance.

1.1 Probabilistic Density Models

A probabilistic density model defines a normalized density P(Y) over the input space. Most probability densities can be written as deriving from an energy function through the Gibbs distribution:

\[ P(Y, W) = \frac{e^{-\beta E(Y, W)}}{\int_y e^{-\beta E(y, W)}} \qquad (1) \]

where β is an arbitrary positive constant, and the denominator is the partition function. If a probability density is not explicitly derived from an energy function in this way, we simply define the energy as E(Y, W) = −log P(Y, W). Training a probabilistic density model from a training dataset T = {Y^i, i ∈ 1 ... p} is generally performed by finding the W that maximizes the likelihood of the training data under the model, ∏_{i=1}^{p} P(Y^i, W). Equivalently, we can minimize a loss function L(W, T) that is proportional to the negative log probability of the data. Using the Gibbs expression for P(Y, W), we obtain:

\[ L(W, T) = \frac{1}{p} \sum_{i=1}^{p} E(Y^i, W) + \frac{1}{\beta} \log \int_y e^{-\beta E(y, W)} \qquad (2) \]

The gradient of L(W, T) with respect to W is:

\[ \frac{\partial L(W, T)}{\partial W} = \frac{1}{p} \sum_{i=1}^{p} \frac{\partial E(Y^i, W)}{\partial W} - \int_y P(y, W) \, \frac{\partial E(y, W)}{\partial W} \qquad (3) \]

In other words, minimizing the first term in eq. 2 with respect to W has the effect of making the energy of observed data points as small as possible, while minimizing the second term (the log partition function) has the effect of "pulling up" on the energy of unobserved data points to make it as high as possible, particularly if their energy is low (their probability under the model is high). Naturally, evaluating the derivative of the log partition function (the second term in equation 3) may be intractable when Y is a high-dimensional variable and E(Y, W) is a complicated function for which the integral has no analytic solution. This is known as the partition function problem. A considerable amount of literature is devoted to this problem. The intractable integral is often evaluated through Monte-Carlo sampling methods, variational approximations, or dramatic shortcuts, such as Hinton's contrastive divergence method [4]. The basic idea of contrastive divergence is to avoid pulling up the energy of every possible point Y, and to merely pull up on the energy of randomly generated points located near the training samples. This ensures that training points will become local minima of the energy surface, which is sufficient in many applications of unsupervised learning.
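To illustrate the two opposing terms of eq. (3), the toy sketch below (our illustration, not the paper's algorithm; all names are hypothetical) uses a simple quadratic energy and approximates the intractable second term with points generated near the training samples, in the spirit of the contrastive divergence shortcut just described. The "negative" samples here come from a crude noisy descent on the energy surface rather than a proper MCMC procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(y, w):
    """Toy energy: squared distance of the input y to a learned prototype w."""
    return 0.5 * np.sum((y - w) ** 2)

def denergy_dw(y, w):
    return -(y - w)          # gradient of the toy energy w.r.t. the parameters

def denergy_dy(y, w):
    return y - w             # gradient of the toy energy w.r.t. the input

def cd_style_update(w, batch, lr=0.1, step=0.1, n_steps=5, noise=0.1):
    """One update with the two terms of eq. (3): push down the energy of
    training points, pull up the energy of points generated near them."""
    # Negative samples: start from the data and take a few noisy downhill
    # steps on the energy surface (a crude stand-in for MCMC sampling).
    y_neg = batch.copy()
    for _ in range(n_steps):
        y_neg -= step * np.array([denergy_dy(y, w) for y in y_neg])
        y_neg += noise * rng.normal(size=y_neg.shape)
    pos = np.mean([denergy_dw(y, w) for y in batch], axis=0)   # first term
    neg = np.mean([denergy_dw(y, w) for y in y_neg], axis=0)   # second term
    return w - lr * (pos - neg)

# Toy run: the prototype drifts toward the region occupied by the training data.
data = rng.normal(loc=2.0, size=(64, 8))
w = np.zeros(8)
for _ in range(100):
    w = cd_style_update(w, data)
```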
1.2 Energy-Based Models

While probabilistic approaches rely on the normalization of the density to guarantee that the energies of unobserved points are larger than those of the training points, other methods use different strategies. Energy-based approaches produce suitable energy surfaces by minimizing loss functionals other than the negative log likelihood (see [1] for a general introduction to energy-based learning in the supervised case). Given a training set T = {Y^i, i ∈ 1 ... p}, we must define a parameterized family of energy functions E(Y, W) in the form of an architecture, and a loss functional L(E(·, W), T) whose role is to measure the "quality" (or badness) of the energy surface E(·, W) on the training set T. An energy surface is "good" if it gives low energy to areas around the training samples, and high energies to all other areas. The simplest loss functional that one can devise, called the energy loss, is simply the average energy of the training samples:

\[ L_{\mathrm{energy}}(W, T) = \frac{1}{p} \sum_{i=1}^{p} E(Y^i, W) \qquad (4) \]

Restoring a corrupted input vector, for instance, may be performed by finding an area of low energy near that input vector [3, 10, 9]. If the energy surface is smooth, this can be performed using gradient descent. However, many applications require that the model extract codes (or representations) from which the training samples can be reconstructed. The code is often designed to have certain desirable properties, such as compactness, independence of the components, or sparseness. Such applications include dimensionality reduction, feature extraction, and clustering. The code vector, denoted by Z, can be seen as a deterministic latent variable in the energy function.
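As a concrete instance of an architecture with an explicit code trained with the energy loss of eq. (4), the sketch below (our illustration; the paper's own model uses a sparse over-complete code rather than this narrow linear bottleneck) defines the energy of an input as its squared reconstruction error from a low-dimensional code Z.

```python
import numpy as np

rng = np.random.default_rng(0)

# A minimal linear auto-encoder trained with the energy loss of eq. (4):
# E(Y, W) is the squared error of reconstructing Y from its code Z.
d, m = 16, 4                        # input dimension, code dimension (m < d)
W_enc = 0.1 * rng.normal(size=(m, d))
W_dec = 0.1 * rng.normal(size=(d, m))

def encode(y):
    return W_enc @ y                # code Z, a deterministic latent variable

def decode(z):
    return W_dec @ z                # reconstruction of Y from the code

def energy(y):
    r = decode(encode(y)) - y
    return 0.5 * np.sum(r ** 2)     # E(Y, W): reconstruction error

lr = 0.01
data = rng.normal(size=(256, d))
for y in data:                      # one pass of stochastic gradient descent
    z = encode(y)
    r = decode(z) - y               # reconstruction residual
    grad_dec = np.outer(r, z)       # dE/dW_dec = r z^T
    grad_enc = np.outer(W_dec.T @ r, y)   # dE/dW_enc = (W_dec^T r) y^T
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
```

Because the code is low-dimensional (m < d), only inputs lying near a low-dimensional subspace can be reconstructed accurately, so minimizing the energy loss alone does not flatten the energy surface: the architecture itself restricts which points can have low energy, as with the traditional methods mentioned in the abstract that constrain the architecture instead of explicitly pulling up on unobserved points.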
