Outline for Today's Presentation
• We will see how RNNs and CNNs compare on a variety of sequence modelling tasks
• Then, we will go through a new approach to sequence modelling that has become state of the art
• Finally, we will look at a few augmented RNN models

RNNs vs CNNs
Empirical Evaluation of Generic Networks for Sequence Modelling

Suppose you are given a sequence modelling task such as text classification or music note prediction, and you are asked to develop a simple model. What would your baseline model be based on – RNNs or CNNs?

Recent Trend in Sequence Modelling
• Sequence modelling is widely considered to be RNNs' "home turf"
• Recent research has shown otherwise:
  • Speech synthesis – WaveNet uses dilated convolutions for synthesis
  • Character-to-character machine translation – ByteNet uses an encoder-decoder architecture with dilated convolutions; tested on an English-German dataset
  • Word-to-word machine translation – a hybrid CNN-LSTM on English-Romanian and English-French datasets
  • Character-level language modelling – ByteNet on the WikiText dataset
  • Word-level language modelling – Gated CNNs on the WikiText dataset

Temporal Convolutional Network (TCN)
• A model that combines best practices in convolutional network design
• Properties of the TCN:
  • Causal – there is no information leakage from the future to the past
  • Memory – it can look very far into the past for prediction/synthesis
  • Input – it can take a sequence of arbitrary length, with proper tuning to the particular task
  • Simple – it uses no gating mechanism and no complex stacking mechanism, and each layer's output has the same length as its input
• Components of the TCN:
  • 1-D dilated convolutions
  • Residual connections

TCN – Dilated Convolutions
1-D convolutions vs 1-D dilated convolutions (source: WaveNet)

TCN – Residual Connections
Residual block of the TCN; example of a residual connection in the TCN
• Layers learn a modification of the identity mapping rather than the full transformation
• This has been shown to be very useful for very deep networks

TCN – Weight Normalization
• Shortcomings of Batch Normalization:
  • It needs two passes over the input – one to compute the batch statistics and one to normalize
  • It takes a significant amount of time to compute for each batch
  • It depends on the batch size – not very useful when the batch is small
  • It cannot be used when training in an online setting
• Weight Normalization instead normalizes the weights of each unit, so it can be applied per training example with no dependence on batch statistics
• The main aim is to decouple the magnitude and the direction of the weight vector:

  o_j = γ_j · (W_j ∗ x) / (‖W_j‖_2 + ε) + β_j

• It has been shown to be faster than Batch Norm
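To make the three ingredients above concrete, here is a minimal sketch of a single TCN residual block in PyTorch, assuming a (batch, channels, time) input layout. The class name TCNBlock and all hyperparameter values are illustrative choices for this example, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils import weight_norm

class TCNBlock(nn.Module):
    """One TCN residual block: two weight-normalized, dilated, causal
    1-D convolutions, with the block input added back at the end."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1, dropout=0.2):
        super().__init__()
        # Left-padding by (k - 1) * d keeps the convolution causal:
        # the output at time t only sees inputs at times <= t.
        self.pad = (kernel_size - 1) * dilation
        self.conv1 = weight_norm(nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation))
        self.conv2 = weight_norm(nn.Conv1d(out_ch, out_ch, kernel_size, dilation=dilation))
        self.dropout = nn.Dropout(dropout)
        # 1x1 convolution so the residual addition works when channel counts differ.
        self.downsample = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):                          # x: (batch, channels, time)
        y = F.pad(x, (self.pad, 0))                # pad only on the left (the past)
        y = self.dropout(torch.relu(self.conv1(y)))
        y = F.pad(y, (self.pad, 0))
        y = self.dropout(torch.relu(self.conv2(y)))
        return torch.relu(y + self.downsample(x))  # residual connection

# Stacking blocks with exponentially growing dilation (1, 2, 4, 8) makes the
# receptive field grow exponentially with depth while each layer's output
# keeps the same length as its input.
tcn = nn.Sequential(*[TCNBlock(32, 32, dilation=2 ** i) for i in range(4)])
out = tcn(torch.randn(8, 32, 100))                 # out.shape == (8, 32, 100)
```

With kernel size 3 and dilations 1, 2, 4, 8, the receptive field of this small stack is 1 + 2·(3−1)·(1+2+4+8) = 61 timesteps, which illustrates how filter length, dilation factor, and depth all control how far the model can look into the past.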
TCN Advantages/Disadvantages
[+] Parallelism – each layer of a convolutional network can be parallelized
[+] Receptive field size – can easily be increased by increasing the filter length, the dilation factor, or the depth
[+] Stable gradients – uses residual connections and dropout
[+] Storage (train) – the memory footprint is smaller than that of RNNs
[+] Sequence length – can easily be adapted to variable input lengths
[-] Storage (test) – during testing it requires more memory, since it must keep the whole receptive field rather than a fixed-size hidden state

Experimental Setup
• The TCN filter size, dilation factor, and number of layers are chosen so that the receptive field covers the entire input sequence
• The hidden sizes and layer counts of the vanilla RNN/LSTM/GRU baselines are chosen to have roughly the same number of parameters as the TCN
• For both model families, the hyperparameter search covered:
  • Gradient clipping – [0.3, 1]
  • Dropout – [0, 0.5]
  • Optimizers – SGD / RMSProp / AdaGrad / Adam
  • Weight initialization – Gaussian N(0, 0.01)
  • Exponential dilation (for the TCN)

Datasets
• Adding Problem:
  • Serves as a stress test for sequence models
  • The input has length n and depth 2: the first dimension holds values drawn at random from [0, 1], and the second dimension is 1 at exactly two positions and 0 elsewhere
  • The goal is to output the sum of the two values marked in the second dimension
• Sequential MNIST and P-MNIST:
  • Tests the ability to remember the distant past
  • Each MNIST digit image is flattened into a 784x1 sequence for digit classification
  • P-MNIST applies a fixed permutation to the pixel values
• Copy Memory:
  • Tests the memory capacity of the model
  • The input has length n + 20: the first 10 digits are randomly selected from 1 to 8, the last 10 digits are 9, and everything else is 0
  • The goal is to copy the first 10 values into the last ten placeholder positions
• Polyphonic Music:
  • Each element of the sequence encodes which of the 88 piano keys are active
  • The goal is to predict the next element in the sequence
• Penn Treebank (PTB):
  • A small language modelling dataset used at both the word and character level
  • 5,059K characters or 888K words for training
• WikiText-103:
  • 28K Wikipedia articles for word-level language modelling
  • 103M words for training
• LAMBADA:
  • Tests the ability to capture longer and broader context
  • 10K passages extracted from novels, framed as a question-answering dataset

Results
Performance analysis – TCN vs TCN with a gating mechanism

Inferences
• Inferences are drawn in the following categories:
1. Memory
  • The copy memory task was designed to check the propagation of information
  • TCNs achieve almost 100% accuracy, whereas RNNs fail at longer sequence lengths
  • The LAMBADA dataset was designed to test local and broader context
  • TCNs again outperform all of their recurrent counterparts
2. Convergence
  • On almost all of the tasks, TCNs converged faster than RNNs
  • The degree of parallelism available is one possible explanation
• The authors conclude that, given enough research, TCNs can outperform state-of-the-art RNN models

To build a simple sequence modelling network, what would you choose – RNNs or CNNs?

Attention Is All You Need
Md Mehrab Tanjim
http://jalammar.github.io/illustrated-transformer/
https://nlp.stanford.edu/seminar/details/lkaiser.pdf

Recap: Sequence to sequence
Recap: Sequence to sequence w/ attention

But how can we calculate the scores?
Score, Query (Q), Key (K), Value (V) – refer to the assignment.

What functions can be used to calculate the score?
1. Additive attention
  a. Computes the compatibility function using a feed-forward network with a single hidden layer (given in the assignment)
2. Dot-product (multiplicative) attention
  a. Much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code (used in the Transformer, explained here)

Dot-product (multiplicative) attention

Why divide by the square root of dk?

Scaling Factor
Problem: Without scaling, additive attention outperforms dot-product attention for larger values of dk.
Cause: The dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients.
Solution: To counteract this effect, scale the dot products by 1/√dk.
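As a concrete illustration of scaled dot-product attention, here is a minimal PyTorch sketch. The function name attention, the optional mask argument, and the toy shapes are assumptions made for this example, not code from the paper.

```python
import math
import torch

def attention(Q, K, V, mask=None):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.size(-1)
    # Compatibility scores between every query and every key, scaled by
    # 1/sqrt(d_k) so the softmax does not saturate when d_k is large.
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)
    if mask is not None:
        # Illegal (e.g. future) positions are set to -inf before the softmax.
        scores = scores.masked_fill(mask, float("-inf"))
    weights = torch.softmax(scores, dim=-1)   # attention distribution over keys
    return weights @ V, weights               # weighted sum of the values

# Toy usage: 5 queries attending over 7 key/value pairs, with d_k = d_v = 64.
Q, K, V = torch.randn(5, 64), torch.randn(7, 64), torch.randn(7, 64)
out, w = attention(Q, K, V)   # out: (5, 64), w: (5, 7)
```

The same function covers both encoder-decoder attention (queries from the decoder, keys and values from the encoder) and self-attention (all three from the same sequence); the decoder's self-attention simply passes a causal mask, as discussed next.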
Complexity

Attention
● This is the encoder-decoder attention.
● Attention between the encoder and the decoder is crucial in NMT.
● Why not also use (self-)attention for the representations themselves?

Self Attention

Three ways of Attention

Encoder Self-Attention
● Each position in the encoder can attend to all positions in the previous layer of the encoder.
● All of the keys, values, and queries come from the same place – in this case, the output of the previous layer of the encoder.

Decoder Self-Attention
● Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position.
● To preserve the auto-regressive property, mask out (by setting to −∞) all values in the input of the softmax that correspond to illegal connections.

Notice any difference from convolution?

Self-Attention vs Convolution
● Convolution: a different linear transformation for each relative position, which lets the model distinguish what information came from where.
● Self-attention: a single attention-weighted average over positions, which reduces the effective resolution.

Multi-head Attention
● Multiple attention layers (heads) run in parallel (shown by different colors).
● Each head uses different linear transformations.
● Different heads can learn different relationships.

But first we need to encode the position!

Positional Encoding
A real example of positional encoding for 20 words (rows) with an embedding size of 512 (columns). It appears split in half down the center because the values of the left half are generated by one function (which uses sine) and the right half by another function (which uses cosine); the two halves are then concatenated to form each positional encoding vector.

Why this function?
● The authors hypothesized that it would allow the model to easily learn to attend by relative positions, since for any fixed offset k, PE(pos+k) can be represented as a linear function of PE(pos).
● The authors also experimented with learned positional embeddings instead, and found that the two versions produced nearly identical results.
● They chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than those encountered during training.

Multi-head Attention
● The authors employed h = 8 parallel attention layers, or heads. For each of these, dk = dv = dmodel/h = 64.
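To make the head arithmetic concrete, below is a minimal PyTorch sketch of multi-head attention using the paper's sizes (h = 8, dmodel = 512, so dk = dv = 64 per head). The class name, layer layout, and toy shapes are illustrative assumptions rather than the reference Transformer implementation.

```python
import math
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    """Illustrative multi-head attention: h heads, each of size d_model // h."""
    def __init__(self, d_model=512, h=8):
        super().__init__()
        assert d_model % h == 0
        self.h, self.d_k = h, d_model // h          # d_k = d_v = d_model / h
        # One linear projection each for queries, keys, and values,
        # plus the output projection applied after the heads are concatenated.
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)
        self.w_o = nn.Linear(d_model, d_model)

    def forward(self, q, k, v, mask=None):          # q, k, v: (batch, seq, d_model)
        B = q.size(0)
        # Project, then split d_model into h heads of size d_k: (batch, h, seq, d_k).
        def split(x, w):
            return w(x).view(B, -1, self.h, self.d_k).transpose(1, 2)
        Q, K, V = split(q, self.w_q), split(k, self.w_k), split(v, self.w_v)
        scores = Q @ K.transpose(-2, -1) / math.sqrt(self.d_k)  # scaled dot products
        if mask is not None:
            scores = scores.masked_fill(mask, float("-inf"))    # e.g. causal mask in the decoder
        out = torch.softmax(scores, dim=-1) @ V                 # (batch, h, seq, d_k)
        # Concatenate the heads back to d_model and apply the output projection.
        out = out.transpose(1, 2).contiguous().view(B, -1, self.h * self.d_k)
        return self.w_o(out)

# Encoder-style self-attention: queries, keys, and values all come from the same sequence.
x = torch.randn(2, 10, 512)
attn = MultiHeadAttention()
print(attn(x, x, x).shape)  # torch.Size([2, 10, 512])
```

The toy run at the bottom corresponds to encoder self-attention; passing an upper-triangular boolean mask instead would give the decoder's masked self-attention described above.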