
Deep Restricted Boltzmann Networks

Hengyuan Hu, Lisheng Gao∗, Quanbin Ma∗
Carnegie Mellon University
∗Equal contribution.

arXiv:1611.07917v1 [cs.LG] 15 Nov 2016

Abstract

Building a good generative model for images has long been an important topic in computer vision and machine learning. The restricted Boltzmann machine (RBM) [5] is one such model that is simple but powerful. However, its restricted form also places heavy constraints on the model's representation power and scalability. Many extensions of the RBM have been invented in order to produce deeper architectures with greater power. The most famous among them are the deep belief network [6], which stacks multiple layer-wise pretrained RBMs to form a hybrid model, and the deep Boltzmann machine [15], which allows connections between hidden units to form a multi-layer structure. In this paper, we present a new method to compose RBMs into a multi-layer, network-style architecture, together with a training method that trains all layers/RBMs jointly. We call the resulting structure a deep restricted Boltzmann network. We further explore the combination of convolutional RBMs with normal fully connected RBMs, which is made trivial under our composition framework. Experiments show that our model can generate decent images and outperforms the normal RBM significantly in terms of image quality and feature quality, without losing much training efficiency.

1. Introduction

The Boltzmann machine (BM) is a family of bidirectionally connected neural network models designed to learn unknown probability distributions [2]. The original Boltzmann machine, however, is seldom useful, as its lateral connections among both visible and hidden units make it computationally impossible to train. The restricted Boltzmann machine (RBM) [5] was proposed to address this problem: the connection pattern of the Boltzmann machine is restricted such that no lateral connections are allowed. This makes the learning procedure much more efficient while still maintaining enough representation power to be a useful generative model [16]. Several deeper architectures were later invented to tackle the problem that single-layer RBMs fail to model complicated probability distributions in practice. Two of the most successful ones are the deep belief network (DBN) [6, 7] and the deep Boltzmann machine (DBM) [15]. A deep belief network consists of multiple layers of RBMs trained in a greedy, layer-by-layer way. The resulting model is a hybrid generative model in which only the top layer remains an undirected RBM while the rest become a directed sigmoid belief network. A deep Boltzmann machine, on the other hand, can be viewed as a less-restricted RBM where connections between hidden units are allowed but restricted to form a multi-layer structure in which there are no intra-layer connections between hidden units. The resulting model is thus still a bipartite graph, so efficient learning can be conducted [15, 14]. The learning procedure is often layer-wise pretraining followed by joint training of the entire DBM.

In this paper, we present a new way to compose RBMs into a deep undirected architecture, together with a learning algorithm that trains all layers jointly from scratch. We call the composed architecture a deep restricted Boltzmann network (DRBN) because each layer consists of one RBM, and the semantics of our architecture is closer to a multi-layer neural network than to a deep Boltzmann machine. We also show that our model can be extended with convolutional RBMs for better scalability.

2. Background

In this section we review the restricted Boltzmann machine and its two major multi-layer extensions, i.e. the deep belief network and the deep Boltzmann machine. These three models are the foundations and inspirations of our new model. Therefore, it is crucial to understand them in order to identify the differences and advantages of our new deep restricted Boltzmann networks.

2.1. Restricted Boltzmann Machine

Figure 1: Restricted Boltzmann Machine.

A restricted Boltzmann machine is an energy based model that can be viewed as a single-layer undirected neural network. It contains a set of visible units v ∈ {0, 1}^D and hidden units h ∈ {0, 1}^P, where D and P are the numbers of visible and hidden units respectively. The parameters involved are θ = {W, b, c}, denoting the mutual weights, the visible units' biases and the hidden units' biases. The energy of a given state (v, h) is defined as:

E(v, h; \theta) = -b^\top v - c^\top h - v^\top W h = -\sum_i b_i v_i - \sum_j c_j h_j - \sum_{i,j} v_i W_{ij} h_j.    (1)

The probability of a particular configuration of the visible state v in the model is

p(v; \theta) = \frac{1}{Z(\theta)} \sum_h e^{-E(v, h; \theta)},    (2)

Z(\theta) = \sum_{v, h} e^{-E(v, h; \theta)},    (3)

where Z(θ) is the partition function. Because the RBM restricts the connections in the graph such that there is no link among visible units or among hidden units, the hidden units h_j become conditionally independent given the visible state v, and vice versa. Hence, the conditional probability of a unit has the following simple form:

p(h_j = 1 \mid v) = \sigma\Big(\sum_i v_i W_{ij} + c_j\Big),    (4)

p(v_i = 1 \mid h) = \sigma\Big(\sum_j h_j W_{ij} + b_i\Big),    (5)

where σ(x) = 1/(1 + exp(−x)) is the sigmoid function. This property allows efficient parallel block Gibbs sampling alternating between v and h, and thus makes the learning process faster.
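As a concrete illustration of Equations 4 and 5 and the block Gibbs sampling they enable, the following minimal NumPy sketch samples h from v and v from h for a binary RBM on a single configuration (no batch dimension). It is not the authors' implementation; the array names (W, b, c), sizes, and random seed are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def p_h_given_v(v, W, c):
    # Eq. (4): p(h_j = 1 | v) = sigma(sum_i v_i W_ij + c_j)
    return sigmoid(v @ W + c)

def p_v_given_h(h, W, b):
    # Eq. (5): p(v_i = 1 | h) = sigma(sum_j h_j W_ij + b_i)
    return sigmoid(h @ W.T + b)

def gibbs_step(v, W, b, c):
    # One step of parallel block Gibbs sampling: v -> h -> v'
    h = (rng.random(c.shape) < p_h_given_v(v, W, c)).astype(float)
    v_new = (rng.random(b.shape) < p_v_given_h(h, W, b)).astype(float)
    return v_new, h

# Toy example with D = 6 visible and P = 4 hidden units.
D, P = 6, 4
W = 0.01 * rng.standard_normal((D, P))
b, c = np.zeros(D), np.zeros(P)
v0 = rng.integers(0, 2, size=D).astype(float)
v1, h1 = gibbs_step(v0, W, b, c)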
The learning algorithm for the RBM is conceptually simple. With the probability of the visible state defined in Equation 2, we can perform gradient descent to maximize p(v). The update rule for the parameters can then be derived by computing the derivative of the negative log-likelihood function with respect to each parameter. However, in order to facilitate later discussion, we take a detour and derive the learning rule under the more general energy based model (EBM) framework.

First we define the free energy of a visible state v as

F(v) = -\log \sum_h e^{-E(v, h)}.    (6)

Although this is a sum over an exponential number of terms, it can be easily computed for an RBM with binary hidden units. It can be proved that F(v) can be rewritten as

F(v) = -\sum_i b_i v_i - \sum_j \log \sum_{h_j} e^{h_j (c_j + \sum_i v_i W_{ij})}.    (7)

With the free energy, the probability of a given visible state in Equation 2 can be simplified as

p(v; \theta) = \frac{1}{Z} e^{-F(v)} = \frac{e^{-F(v)}}{\sum_x e^{-F(x)}}.    (8)

We then derive the derivatives as

\frac{\partial (-\log p(v^{(i)}))}{\partial \theta} = \frac{\partial F(v^{(i)})}{\partial \theta} - \sum_j p(v^{(j)}) \frac{\partial F(v^{(j)})}{\partial \theta},    (9)

where the v^{(i)} are the current training data and the v^{(j)} are all possible visible configurations that can be generated by the model.

While the first term in the equation, often called the data-dependent term, can be computed directly given the training data, the second term, often called the model-dependent term, is almost impossible to compute, as the number of possible v^{(j)} is exponential in the input size. Persistent contrastive divergence (PCD) [18, 11] has been widely employed to estimate the second term. The algorithm works as shown in Algorithm 1, where N, the chain size, denotes the number of PCD particles used.

Algorithm 1 PCD(k, N)
1: Randomly initialize N particles v_0^{(1)}, v_0^{(2)}, ..., v_0^{(N)}.
2: for t = 1 to NUM_ITERATION do
3:   for all v_t^{(j)}, j = 1, 2, ..., N do
4:     Do k Gibbs sampling iterations to get v_{t,k}^{(j)}.
5:   end for
6:   v_{t+1}^{(j)} ← v_{t,k}^{(j)}
7:   Use the v_{t+1}^{(j)} and Eq. 10 to compute gradients.
8:   Update parameters with the gradients.
9: end for

Using the PCD algorithm to approximate the model-dependent term, Equation 9 becomes

\frac{\partial (-\log p(v^{(i)}))}{\partial \theta} = \frac{\partial F(v^{(i)})}{\partial \theta} - \frac{1}{N} \sum_j \frac{\partial F(v^{(j)})}{\partial \theta},    (10)

where the v^{(j)} are now the N PCD particles. This will be a key equation for parameter updates in later sections.
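The following sketch combines Algorithm 1 with the Equation 10 update for a binary RBM, using the standard free-energy gradients for binary units and reusing np, rng, p_h_given_v and gibbs_step from the previous sketch. The batch averaging, learning rate, and the choices of k and N are illustrative assumptions, not the paper's settings.

def pcd_update(batch, particles, W, b, c, k=1, lr=0.01):
    # Data-dependent term: dF/dtheta averaged over the training batch.
    ph_data = p_h_given_v(batch, W, c)              # shape (B, P)
    pos_W = batch.T @ ph_data / len(batch)
    pos_b = batch.mean(axis=0)
    pos_c = ph_data.mean(axis=0)

    # Model-dependent term: k Gibbs steps on each of the N persistent chains
    # (Algorithm 1, lines 3-6), then average dF/dtheta over the particles.
    for _ in range(k):
        particles = np.stack([gibbs_step(p, W, b, c)[0] for p in particles])
    ph_model = p_h_given_v(particles, W, c)
    neg_W = particles.T @ ph_model / len(particles)
    neg_b = particles.mean(axis=0)
    neg_c = ph_model.mean(axis=0)

    # Gradient step on -log p(v): data term minus model term, as in Eq. (10).
    W += lr * (pos_W - neg_W)
    b += lr * (pos_b - neg_b)
    c += lr * (pos_c - neg_c)
    return particles, W, b, c

# Example: N = 8 persistent chains and a placeholder training batch.
particles = rng.integers(0, 2, size=(8, D)).astype(float)
batch = rng.integers(0, 2, size=(16, D)).astype(float)
particles, W, b, c = pcd_update(batch, particles, W, b, c, k=1, lr=0.01)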
2.2. Deep Belief Network

The deep belief network, as shown in Figure 2a, is a deep architecture built upon the RBM to increase its representation power by increasing depth. In a DBN, two adjacent layers are connected in the same way as in an RBM. The network is trained in a greedy, layer-by-layer manner [6], where the bottom layer is trained alone as an RBM and then fixed, so that its hidden activations serve as the input for training the next layer.

Figure 2: Architectures of (a) Deep Belief Network; (b) Deep Boltzmann Machine.

2.3. Deep Boltzmann Machine

Note that the equation for the middle layer is different because it depends on both of its adjacent neighbors. Block Gibbs sampling can still be performed by alternating between odd and even layers, which makes efficient learning possible. Furthermore, the mean-field method and persistent contrastive divergence [18, 11] are employed to make the learning tractable [15, 14]. Note that the DBM also needs greedy layer-wise pretraining to reach its best performance when the number of hidden layers is greater than 2.

3. Deep Restricted Boltzmann Network

Both the deep belief network and the deep Boltzmann machine are rich models with enhanced representation power over the simplest RBM but a more tractable learning rule than the original BM. However, it is interesting to see whether we can devise a new rule to stack the simplest RBMs together such that the resulting model can both generate better images and extract higher quality features.
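To make the odd/even alternation described for the DBM in Section 2.3 concrete, here is a hypothetical sketch of one block Gibbs sweep over a stack with two hidden layers, in which the middle layer is conditioned on both of its neighbors. It illustrates the DBM-style sampling reviewed above, not the DRBN procedure introduced in this section; the shapes and names (W1: D x P1, W2: P1 x P2) are assumptions, and sigmoid and rng come from the first sketch.

def dbm_gibbs_sweep(v, h2, W1, W2, b, c1, c2):
    # Odd layer: h1 depends on the layer below (v) and the layer above (h2).
    p_h1 = sigmoid(v @ W1 + h2 @ W2.T + c1)
    h1 = (rng.random(p_h1.shape) < p_h1).astype(float)
    # Even layers: v and h2 are conditionally independent given h1.
    p_v = sigmoid(h1 @ W1.T + b)
    p_h2 = sigmoid(h1 @ W2 + c2)
    v = (rng.random(p_v.shape) < p_v).astype(float)
    h2 = (rng.random(p_h2.shape) < p_h2).astype(float)
    return v, h1, h2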