Boltzmann Machine Learning with the Latent Maximum Entropy Principle

Shaojun Wang (University of Toronto, Toronto, Canada), Dale Schuurmans (University of Waterloo, Waterloo, Canada), Fuchun Peng (University of Waterloo, Waterloo, Canada), Yunxin Zhao (University of Missouri, Columbia, USA)

Abstract

We present a new statistical learning paradigm for Boltzmann machines based on a new inference principle we have proposed: the latent maximum entropy principle (LME). LME is different both from Jaynes' maximum entropy principle and from standard maximum likelihood estimation. We demonstrate the LME principle by deriving new algorithms for Boltzmann machine parameter estimation, and show how a robust and rapidly convergent new variant of the EM algorithm can be developed. Our experiments show that estimation based on LME generally yields better results than maximum likelihood estimation when inferring models from small amounts of data.

1 Introduction

Boltzmann machines are probabilistic networks of binary valued random variables that have wide application in areas of pattern recognition and combinatorial optimization (Aarts and Korst 1989). A Boltzmann machine can be represented as an undirected graphical model whose associated probability distribution has a simple quadratic energy function, and hence has the form of a Boltzmann-Gibbs distribution.

Typically, the random variables are divided into a set of visible variables, whose states are clamped at observed data values, and a separate set of hidden variables, whose values are unobserved. Inference, therefore, usually consists of calculating the conditional probability of a configuration of the hidden variables, or finding a most likely configuration of the hidden variables, given observed values for the visible variables. Inference in Boltzmann machines is known to be hard, because it generally involves summing or maximizing over an exponential number of possible configurations of the hidden variables (Ackley et al. 1985, Welling and Teh 2003).

In this paper we will focus on the key problem of estimating the parameters of a Boltzmann machine from data. There is a surprisingly simple algorithm (Ackley et al. 1985) for performing maximum likelihood estimation of the weights of a Boltzmann machine, or equivalently, for minimizing the KL divergence between the empirical data distribution and the implied Boltzmann distribution. This classical algorithm is based on a direct gradient ascent approach in which one calculates (or estimates) the derivative of the likelihood function with respect to the model parameters. The simplicity and locality of this gradient ascent algorithm have attracted a great deal of interest.
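The rule itself has a strikingly local form: the derivative of the log-likelihood with respect to a weight is the difference between the correlation of the two connected units when the visible units are clamped to the data and when the network runs freely. The Python sketch below computes that quantity exactly by enumerating all joint states, which is feasible only for toy networks; the 0/1 state convention, the function name, and the handling of constant factors (folded into the learning rate) are our own illustrative choices rather than details taken from the paper.

```python
import itertools
import numpy as np

def boltzmann_exact_gradient(W, b, visible_data, n_hidden):
    """Exact ascent direction for a small Boltzmann machine.

    States s are 0/1 vectors over visible then hidden units, with
    p(s) proportional to exp(0.5 * s^T W s + b^T s) for a symmetric W
    (zero diagonal, as usual).  The returned direction is the classical
    <s_i s_j>_clamped - <s_i s_j>_free rule (Ackley et al. 1985), up to
    constant factors that can be absorbed into the learning rate.
    Everything is enumerated, so only a handful of units is practical.
    """
    n = W.shape[0]
    n_visible = n - n_hidden
    states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)

    def unnorm(s):
        # Unnormalized probability exp(-E(s)) for each row of s.
        return np.exp(0.5 * np.einsum('ki,ij,kj->k', s, W, s) + s @ b)

    # Free phase: expectations under the model distribution.
    p_free = unnorm(states)
    p_free /= p_free.sum()
    free_ss = np.einsum('k,ki,kj->ij', p_free, states, states)
    free_s = p_free @ states

    # Clamped phase: visible units fixed at each data vector, hidden summed out.
    clamp_ss = np.zeros_like(W)
    clamp_s = np.zeros_like(b)
    for v in visible_data:
        mask = np.all(states[:, :n_visible] == v, axis=1)
        sub = states[mask]
        p_cond = unnorm(sub)
        p_cond /= p_cond.sum()              # p(hidden | visible = v)
        clamp_ss += np.einsum('k,ki,kj->ij', p_cond, sub, sub)
        clamp_s += p_cond @ sub
    clamp_ss /= len(visible_data)
    clamp_s /= len(visible_data)

    return clamp_ss - free_ss, clamp_s - free_s
```

A single learning step is then W += eta * dW and b += eta * db for some step size eta, and the choice of eta is precisely where the difficulties discussed next arise.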
However, there are two significant problems with the standard Boltzmann machine training algorithm that we address in this paper. First, the convergence rate of the standard algorithm is extremely slow. It is well known that gradient ascent converges only linearly, with a rate that depends critically on the condition of the Hessian matrix of the likelihood function. To achieve faster convergence one generally has to adjust the step size. In practice, gradient ascent methods often select step sizes by performing an explicit line search in the gradient direction, which can lead to non-trivial overhead.

Second, it is known that the true distribution is multi-modal, and the likelihood function has multiple local maxima. This raises the question of which local maximizer to choose as the final estimate. Fisher's classical maximum likelihood estimation (MLE) principle states that the desired estimate corresponds to the global maximizer of the likelihood function. However, in practice it is often observed that MLE leads to over-fitting (poor generalization), particularly when faced with limited training data.

To address both of the above problems in the context of Boltzmann machines, we have recently proposed a new statistical machine learning framework for density estimation and pattern classification, which we refer to as the latent maximum entropy (LME) principle (Wang et al. 2003).¹ Although classical statistics is heavily based on parametric models, such an approach can sometimes be overly restrictive and prevent accurate models from being developed. The alternative principle we propose, LME, is a non-parametric approach based on matching a set of features in the data (i.e. sufficient statistics, weak learners, or basis functions). This technique becomes parametric only when we necessarily have to approximate the principle. LME is an extension of Jaynes' maximum entropy (ME) principle that explicitly incorporates latent variables in the formulation, and thereby extends the original principle to cases where data components are missing. The resulting principle is different from both maximum likelihood estimation and standard maximum entropy, but often yields better estimates in the presence of hidden variables and limited training data.

In this paper we demonstrate the LME principle by deriving new algorithms for Boltzmann machine parameter estimation, and show how a robust and rapidly convergent new variant of the EM algorithm can be developed. Our experiments show that estimation based on LME generally yields better results than maximum likelihood estimation, particularly when inferring hidden units from small amounts of data.

The remainder of the paper is organized as follows. First, in Section 2 we introduce the latent maximum entropy principle, and then outline a general training algorithm for this principle in Section 3. Once these preliminaries are in place, we present the main contribution of this paper, a new training algorithm for Boltzmann machine estimation, in Section 4. Section 5 compares the new parameter optimization procedure to standard procedures, and Section 6 compares the generalization performance of the new estimation principle to standard maximum likelihood. We conclude the paper with a brief discussion in Section 7.

¹ A third problem, which we do not directly address in this paper, is that calculating a gradient involves summing over exponentially many configurations of the hidden variables, and thus becomes intractable in large networks. It is therefore usually impractical to perform exact gradient ascent in these models, and some form of approximate inference (Welling and Teh 2003) or Monte Carlo estimation (Ackley et al. 1985) is needed to approximate the gradient. For the purposes of this paper, we simply assume that some sort of reasonable approximation technique is available. See (Southey et al. 2003) for an attempt to improve classical estimators for this problem.

2 The latent maximum entropy principle

To formulate the LME principle, let $X \in \mathcal{X}$ be a random variable denoting the complete data, $Y \in \mathcal{Y}$ be the observed incomplete data, and $Z \in \mathcal{Z}$ be the missing data. That is, $X = (Y, Z)$. If we let $p(x)$ and $p(y)$ denote the densities of $X$ and $Y$ respectively, and let $p(z|y)$ denote the conditional density of $Z$ given $Y$, then $p(y) = \int_{z \in \mathcal{Z}} p(x)\, \mu(dz)$, where $p(x) = p(y)\, p(z|y)$.

LME principle. Given features $f_1, \ldots, f_N$ specifying the properties we would like to match in the data, select a joint probability model $p^*$ from the space $\mathcal{P}$ of all distributions over $\mathcal{X}$ to maximize the joint entropy

$$H(p) = -\int_{x \in \mathcal{X}} p(x) \log p(x)\, \mu(dx) \qquad (1)$$

subject to the constraints

$$\int_{x \in \mathcal{X}} f_i(x)\, p(x)\, \mu(dx) = \sum_{y \in \mathcal{Y}} \tilde{p}(y) \int_{z \in \mathcal{Z}} f_i(x)\, p(z|y)\, \mu(dz), \qquad i = 1, \ldots, N, \quad Y \text{ and } Z \text{ not independent}, \qquad (2)$$

where $x = (y, z)$. Here $\tilde{p}(y)$ is the empirical distribution over the observed data, and $\mathcal{Y}$ denotes the set of observed $Y$ values. Intuitively, the constraints specify that we require the expectations of $f_i(X)$ in the complete model to match their empirical expectations on the incomplete data $Y$, taking into account the structure of the dependence of the unobserved component $Z$ on $Y$.
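To make constraint (2) concrete, the sketch below evaluates both of its sides for one feature on a finite complete-data space, with the candidate joint model stored as a table over (y, z) pairs. The function name, the dictionary representation, and the toy setting are our own illustration of the constraint, not part of the LME formulation itself.

```python
from collections import Counter

def lme_constraint_gap(p_joint, feature, observed_y, y_values, z_values):
    """Difference between the two sides of LME constraint (2) for one feature.

    p_joint is a dict mapping every pair (y, z) to its probability under a
    candidate joint model; feature(y, z) plays the role of f_i(x); observed_y
    is the list of incomplete observations used to form the empirical
    distribution over y.
    """
    # Left side: expectation of the feature under the full joint model.
    lhs = sum(p_joint[y, z] * feature(y, z) for y in y_values for z in z_values)

    # Empirical distribution over the observed (incomplete) data.
    counts = Counter(observed_y)
    p_tilde = {y: c / len(observed_y) for y, c in counts.items()}

    # Right side: empirical average of the conditional expectation of the
    # feature, with the missing component filled in by p(z | y).
    rhs = 0.0
    for y, w in p_tilde.items():
        p_y = sum(p_joint[y, z] for z in z_values)   # marginal p(y)
        rhs += w * sum(p_joint[y, z] / p_y * feature(y, z) for z in z_values)

    return lhs - rhs
```

A candidate model is feasible when this gap vanishes for every feature; among all feasible models, the LME principle selects the one with the largest joint entropy (1).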
Unfortunately, there is no simple solution for $p^*$ in (1)-(2). However, a good approximation can be obtained by restricting the model to have an exponential form

$$p_\lambda(x) = \frac{1}{\Phi_\lambda} \exp\Big( \sum_{i=1}^{N} \lambda_i f_i(x) \Big)$$

where $\Phi_\lambda = \int_{x \in \mathcal{X}} \exp\big( \sum_{i=1}^{N} \lambda_i f_i(x) \big)\, \mu(dx)$ is a normalizing constant that ensures $\int_{x \in \mathcal{X}} p_\lambda(x)\, \mu(dx) = 1$. This restriction provides a free parameter $\lambda_i$ for each feature function $f_i$. By adopting such a "log-linear" restriction, it turns out that we can formulate a practical algorithm for approximately satisfying the LME principle.

3 A general training algorithm for log-linear models

To derive a practical training algorithm for log-linear models, we exploit the following intimate connection between LME and maximum likelihood estimation (MLE).

Theorem 1. Under the log-linear assumption, maximizing the likelihood of log-linear models on incomplete data is equivalent to satisfying the feasibility constraints of the LME principle. That is, the only distinction between MLE and LME in log-linear models is that, among local maxima (feasible solutions), LME selects the model with the maximum entropy, whereas MLE selects the model with the maximum likelihood (Wang et al. 2003).

This connection allows us to exploit an EM algorithm (Dempster et al. 1977) to find feasible solutions to the LME constraints. In each EM iteration, the conditional expectations on the right-hand side of the constraints no longer depend on $\lambda$ but rather on the fixed constants from the previous iteration $\lambda^{(j)}$. This means that maximizing (4) subject to (5) with respect to $\lambda$ is now a convex optimization problem with linear constraints. The generalized iterative scaling (GIS) algorithm (Darroch et al. 1972) or the improved iterative scaling (IIS) algorithm (Della Pietra et al. 1997) can therefore be used to maximize $Q(\lambda, \lambda^{(j)})$ very efficiently.

From these observations, we can recover feasible log-linear models by using an algorithm that combines EM with nested iterative scaling to calculate the M step.
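For a finite complete-data space, that combination can be sketched as an outer EM loop whose E step fixes the constraint targets at $\lambda^{(j)}$ and whose M step runs a few GIS updates toward those fixed targets. The feature-matrix representation, the added slack feature (so that GIS's constant feature-sum requirement holds), and the function em_is below are our own illustrative choices under the log-linear restriction, not the paper's EM-IS pseudocode.

```python
import numpy as np

def em_is(F, y_of, observed_y, n_outer=50, n_inner=20):
    """EM with nested generalized iterative scaling for a log-linear model
    with missing data, on a fully enumerated complete-data space.

    F[k, i]    : value of feature f_i on the k-th complete configuration
                 (features are assumed non-negative, as GIS requires).
    y_of[k]    : the observed component y of the k-th configuration.
    observed_y : the incomplete training observations.
    """
    F = np.asarray(F, dtype=float)
    K, N = F.shape
    # Slack feature so every configuration has the same total feature mass C.
    C = F.sum(axis=1).max()
    F_aug = np.hstack([F, (C - F.sum(axis=1))[:, None]])
    lam = np.zeros(N + 1)

    y_of = np.asarray(y_of)
    ys, counts = np.unique(np.asarray(observed_y), return_counts=True)
    p_tilde = counts / counts.sum()

    def model_probs(l):
        w = np.exp(F_aug @ l)
        return w / w.sum()

    for _ in range(n_outer):
        # E step: right-hand side of the constraints, with the missing data
        # filled in by p_lambda(z | y) at the current (fixed) parameters.
        p = model_probs(lam)
        target = np.zeros(N + 1)
        for y, w in zip(ys, p_tilde):
            mask = (y_of == y)
            cond = p[mask] / p[mask].sum()
            target += w * (cond @ F_aug[mask])
        # M step: nested GIS updates toward the fixed targets.
        for _ in range(n_inner):
            expect = model_probs(lam) @ F_aug
            lam += np.log((target + 1e-12) / (expect + 1e-12)) / C
    return lam[:N], model_probs(lam)
```

Because the targets are recomputed only in the outer loop, each inner optimization is the convex, linearly constrained problem described above, and iterative scaling increases its objective monotonically.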
