
On orthogonality and learning recurrent networks with long term dependencies

Eugene Vorontsov 1,2, Chiheb Trabelsi 1,2, Samuel Kadoury 1,3, Chris Pal 1,2

1 École Polytechnique de Montréal, Montréal, Canada; 2 Montreal Institute for Learning Algorithms, Montréal, Canada; 3 CHUM Research Center, Montréal, Canada. Correspondence to: Eugene Vorontsov <[email protected]>.

Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s). arXiv:1702.00071v4 [cs.LG], 12 Oct 2017.

Abstract

It is well known that it is challenging to train deep neural networks and recurrent neural networks for tasks that exhibit long term dependencies. The vanishing or exploding gradient problem is a well known issue associated with these challenges. One approach to addressing vanishing and exploding gradients is to use either soft or hard constraints on weight matrices so as to encourage or enforce orthogonality. Orthogonal matrices preserve gradient norm during backpropagation and may therefore be a desirable property. This paper explores issues with optimization convergence, speed and gradient stability when encouraging or enforcing orthogonality. To perform this analysis, we propose a weight matrix factorization and parameterization strategy through which we can bound matrix norms and therein control the degree of expansivity induced during backpropagation. We find that hard constraints on orthogonality can negatively affect the speed of convergence and model performance.

1. Introduction

The depth of deep neural networks confers representational power, but also makes model optimization more challenging. Training deep networks with gradient descent based methods is known to be difficult as a consequence of the vanishing and exploding gradient problem (Hochreiter & Schmidhuber, 1997). Typically, exploding gradients are avoided by clipping large gradients (Pascanu et al., 2013) or introducing an L2 or L1 weight norm penalty. The latter has the effect of bounding the spectral radius of the linear transformations, thus limiting the maximal gain across the transformation. Krueger & Memisevic (2015) attempt to stabilize the norm of propagating signals directly by penalizing differences in successive norm pairs in the forward pass, and Pascanu et al. (2013) propose to penalize successive gradient norm pairs in the backward pass. These regularizers affect the network parameterization with respect to the data instead of penalizing weights directly.

Both expansivity and contractivity of linear transformations can also be limited by more tightly bounding their spectra. By limiting the transformations to be orthogonal, their singular spectra are limited to unitary gain, causing the transformations to be norm-preserving. Le et al. (2015) and Henaff et al. (2016) have respectively shown that identity initialization and orthogonal initialization can be beneficial. Arjovsky et al. (2015) have gone beyond initialization, building unitary recurrent neural network (RNN) models with transformations that are unitary by construction, which they achieved by composing multiple basic unitary transformations. The resulting transformations, for some n-dimensional input, cover only some subset of possible n × n unitary matrices but appear to perform well on simple tasks and have the benefit of having low complexity in memory and computation. Similarly, Jing et al. (2016) introduce an efficient algorithm to cover a large subset.

The entire set of possible unitary or orthogonal parameterizations forms the Stiefel manifold. At a much higher computational cost, gradient descent optimization directly along this manifold can be done via geodesic steps (Nishimori, 2005; Tagare, 2011). Recent work (Wisdom et al., 2016) has proposed the optimization of unitary matrices along the Stiefel manifold using geodesic gradient descent. To produce a full-capacity parameterization for unitary matrices they use some insights from Tagare (2011), combining the use of canonical inner products and Cayley transformations. Their experimental work indicates that full-capacity unitary RNN models can solve the copy memory problem, whereas both LSTM networks and restricted-capacity unitary RNN models having similar complexity appear unable to solve the task for a longer sequence length (T = 2000). Hyland & Rätsch (2017) and Mhammedi et al. (2016) introduced more computationally efficient full-capacity parameterizations. Harandi & Fernando (2016) also find that the use of fully connected "Stiefel layers" improves the performance of some convolutional neural networks.

We seek to gain a new perspective on this line of research by exploring the optimization of real valued matrices within a configurable margin about the Stiefel manifold. We suspect that a strong constraint of orthogonality limits the model's representational power, hindering its performance, and may make optimization more difficult. We explore this hypothesis empirically by employing a factorization technique that allows us to limit the degree of deviation from the Stiefel manifold¹. While we use geodesic gradient descent, we simultaneously update the singular spectra of our matrices along Euclidean steps, allowing optimization to step away from the manifold while still curving about it.

¹ Source code for the model and experiments is located at https://github.com/veugene/spectre_release
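As background for the geodesic updates referred to above, the sketch below shows one common form of a Stiefel-manifold step: a Cayley-transform update in the spirit of Tagare (2011) and Wisdom et al. (2016). It is a minimal NumPy illustration rather than this paper's implementation; the function name, step-size convention, and the particular skew-symmetric descent direction are assumptions for the example.

```python
import numpy as np

def cayley_geodesic_step(W, grad, lr):
    """One descent-style step that keeps W orthogonal (illustrative sketch).

    W    : current orthogonal matrix.
    grad : Euclidean gradient dL/dW.
    lr   : step size (name and convention are assumptions).
    """
    n = W.shape[0]
    A = grad @ W.T - W @ grad.T          # skew-symmetric direction built from the gradient
    I = np.eye(n)
    # The Cayley transform of a skew-symmetric matrix is orthogonal,
    # so the updated matrix remains on the manifold of orthogonal matrices.
    return np.linalg.solve(I + (lr / 2.0) * A, (I - (lr / 2.0) * A) @ W)

# Sanity check: orthogonality is preserved to numerical precision.
rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.standard_normal((32, 32)))
W = cayley_geodesic_step(W, rng.standard_normal((32, 32)), lr=0.01)
print(np.max(np.abs(W.T @ W - np.eye(32))))   # ~1e-15
```

The property exercised here is simply that the Cayley transform of a skew-symmetric matrix is orthogonal, so repeated updates never leave the manifold.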
1.1. Vanishing and Exploding Gradients

The issue of vanishing and exploding gradients, as it pertains to the parameterization of neural networks, can be illuminated by looking at the gradient back-propagation chain through a network.

A neural network with N hidden layers has pre-activations

\[
a_i(h_{i-1}) = W_i h_{i-1} + b_i, \qquad i \in \{2, \cdots, N-1\}.
\tag{1}
\]

For notational convenience, we combine the parameters $W_i$ and $b_i$ to form an affine matrix $\theta$. We can see that for some loss function L at layer n, the derivative with respect to parameters $\theta_i$ is:

\[
\frac{\partial L}{\partial \theta_i} = \frac{\partial a_{n+1}}{\partial \theta_i}\,\frac{\partial L}{\partial a_{n+1}}.
\tag{2}
\]

The partial derivatives for the pre-activations can be decomposed as follows:

\[
\frac{\partial a_{i+1}}{\partial \theta_i} = \frac{\partial a_i}{\partial \theta_i}\,\frac{\partial h_i}{\partial a_i}\,\frac{\partial a_{i+1}}{\partial h_i} = \frac{\partial a_i}{\partial \theta_i}\, D_i W_{i+1} \;\;\rightarrow\;\; \frac{\partial a_{i+1}}{\partial a_i} = D_i W_{i+1},
\tag{3}
\]

where $D_i$ is the Jacobian corresponding to the activation function, containing partial derivatives of the hidden units at layer i + 1 with respect to the pre-activation inputs. Typically, D is diagonal. Following the above, the gradient in Equation 2 can be fully decomposed into a recursive chain of matrix products:

\[
\frac{\partial L}{\partial \theta_i} = \frac{\partial a_i}{\partial \theta_i} \prod_{j=i}^{n} (D_j W_{j+1})\, \frac{\partial L}{\partial a_{n+1}}.
\tag{4}
\]

In Pascanu et al. (2013), it is shown that the 2-norm of $\partial a_{t+1} / \partial a_t$ is bounded by the product of the norms of the non-linearity's Jacobian and the transition matrix at time t (layer i), as follows:

\[
\left\lVert \frac{\partial a_{t+1}}{\partial a_t} \right\rVert \le \lVert D_t \rVert\, \lVert W_t \rVert \le \lambda_{D_t} \lambda_{W_t} = \eta_t, \qquad \lambda_{D_t}, \lambda_{W_t} \in \mathbb{R},
\tag{5}
\]

where $\lambda_{D_t}$ and $\lambda_{W_t}$ are the largest singular values of the non-linearity's Jacobian $D_t$ and the transition matrix $W_t$. In RNNs, $W_t$ is shared across time and can be simply denoted as W.

Equation 5 shows that the gradient can grow or shrink at each layer depending on the gain of each layer's linear transformation W and the gain of the Jacobian D. The gain caused by each layer is magnified across all time steps or layers. It is easy to have extreme amplification in a recurrent neural network where W is shared across time steps and a non-unitary gain in W is amplified exponentially. The phenomena of extreme growth or contraction of the gradient across time steps or layers are known as the exploding and the vanishing gradient problems, respectively. It is sufficient for RNNs to have $\eta_t \le 1$ at each time t to enable the possibility of vanishing gradients, typically for some large number of time steps T.
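To make Equations 4 and 5 concrete, the following small numerical sketch accumulates the Jacobian product for a simple tanh recurrence $a_{t+1} = W \tanh(a_t)$ and compares its 2-norm to the bound $\prod_t \eta_t$. The recurrence, dimensions, and gains here are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 32, 50

def random_transition(gain):
    """Random n x n transition matrix rescaled so its largest singular value equals `gain`."""
    W = rng.standard_normal((n, n))
    return W * (gain / np.linalg.norm(W, 2))

def jacobian_norm_and_bound(W, T):
    """2-norm of d a_T / d a_0 for a_{t+1} = W tanh(a_t), and the bound prod_t eta_t."""
    a = rng.standard_normal(n)
    J = np.eye(n)        # running product of per-step Jacobians (Eq. 4)
    bound = 1.0
    for _ in range(T):
        h = np.tanh(a)
        D = np.diag(1.0 - h ** 2)                 # Jacobian of tanh at the pre-activations
        J = W @ D @ J                             # d a_{t+1} / d a_t = W D_t, chained over time
        bound *= np.linalg.norm(W, 2) * np.linalg.norm(D, 2)   # eta_t = lambda_W * lambda_D (Eq. 5)
        a = W @ h
    return np.linalg.norm(J, 2), bound

for gain in (0.9, 1.0, 1.1):
    actual, bound = jacobian_norm_and_bound(random_transition(gain), T)
    print(f"gain {gain:.1f}: ||da_T/da_0||_2 = {actual:.3e}, bound = {bound:.3e}")
```

With the largest singular value of W below, at, or above one, the bound (and typically the gradient norm itself) shrinks or grows exponentially with the number of time steps.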
The rate at which a gradient (or forward signal) vanishes depends on both the parameterization of the model and on the input data. The parameterization may be conditioned by placing appropriate constraints on W. It is worth keeping in mind that the Jacobian D is typically contractive (thus tending to be norm-reducing) and is also data-dependent, whereas W can vary from being contractive to norm-preserving to expansive, and applies the same gain on the forward signal as on the back-propagated gradient signal.

2. Our Approach

Vanishing and exploding gradients can be controlled to a large extent by controlling the maximum and minimum gain of W. The maximum gain of a matrix W is given by its spectral norm,

\[
\lVert W \rVert_2 = \max_{x} \left[ \frac{\lVert W x \rVert}{\lVert x \rVert} \right].
\tag{6}
\]

By keeping our weight matrix W close to orthogonal, one can ensure that it is close to a norm-preserving transformation (where the spectral norm is equal to one, but the minimum gain is also one). One way to achieve this is via a simple soft constraint or regularization term of the form

\[
\lambda \sum_i \lVert W_i^T W_i - I \rVert^2.
\tag{7}
\]

However, it is possible to formulate a more direct parameterization or factorization for W which permits hard bounds on the amount of expansion and contraction induced by W. This can be achieved by simply parameterizing W according to its singular value decomposition, which consists of the composition of orthogonal basis matrices U and V with a diagonal spectral matrix S containing the singular values, which are real and positive by definition. The deviation from orthogonality can then be hard-bounded by specifying a margin m about 1 within which the singular values must lie. This is achieved with the parameterization

\[
s_i = 2m\,(\sigma(p_i) - 0.5) + 1, \qquad s_i \in \{\operatorname{diag}(S)\}, \quad m \in [0, 1].
\tag{10}
\]

The singular values are thus restricted to the range (1 − m, 1 + m).
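As a concrete illustration of the soft constraint in Equation 7, the following sketch computes the penalty $\lambda \sum_i \lVert W_i^T W_i - I \rVert^2$ for a list of weight matrices; the squared norm is taken here to be the squared Frobenius norm, and the function name and interface are assumptions for the example.

```python
import numpy as np

def soft_orthogonality_penalty(weights, lam):
    """Soft orthogonality regularizer: lam * sum_i ||W_i^T W_i - I||_F^2 (Eq. 7)."""
    total = 0.0
    for W in weights:
        gram = W.T @ W
        total += np.sum((gram - np.eye(gram.shape[0])) ** 2)   # squared Frobenius norm
    return lam * total

# An orthogonal matrix incurs (numerically) zero penalty; a rescaled one does not.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((64, 64)))
print(soft_orthogonality_penalty([Q], lam=0.01))          # ~0
print(soft_orthogonality_penalty([1.1 * Q], lam=0.01))    # > 0
```

The penalty grows as the Gram matrix W^T W departs from the identity, i.e. as the singular values of W drift away from one.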
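The hard-bounded factorization described above can be sketched as follows: W is assembled as U S V^T with U and V orthogonal and the diagonal of S produced by the sigmoidal reparameterization of Equation 10. This is a minimal illustration under assumed names; in the approach described in this paper, U and V would additionally be kept orthogonal via geodesic updates while the spectral parameters follow ordinary Euclidean updates.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def build_transition(U, V, p, m):
    """Assemble W = U S V^T with singular values confined to (1 - m, 1 + m).

    U, V : orthogonal basis matrices (kept orthogonal, e.g. by geodesic updates).
    p    : unconstrained spectral parameters, one per singular value.
    m    : margin about 1 for the singular values, m in [0, 1].
    """
    s = 2.0 * m * (sigmoid(p) - 0.5) + 1.0   # Eq. 10
    return (U * s) @ V.T                      # equivalent to U @ np.diag(s) @ V.T

# Illustrative usage with random orthogonal bases and a 10% margin.
rng = np.random.default_rng(0)
n, m = 64, 0.1
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
W = build_transition(U, V, rng.standard_normal(n), m)
print(np.linalg.svd(W, compute_uv=False).max())   # stays below 1 + m
print(np.linalg.svd(W, compute_uv=False).min())   # stays above 1 - m
```

Setting m = 0 recovers a strictly orthogonal W, while larger margins allow a controlled degree of expansion or contraction.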