Deep Learning: Recurrent Neural Networks

CS5621 Machine Learning – Part II – Lecture 5
Deep Learning: Recurrent Neural Networks
Dr. Nisansa de Silva, Department of Computer Science & Engineering
http://nisansads.staff.uom.lk/

Outline
▪ Recurrent Neural Networks
  – Vanilla RNN
  – Exploding and Vanishing Gradients
▪ Networks with Memory
  – Long Short-Term Memory (LSTM)
  – Gated Recurrent Unit (GRU)
▪ Sequence Learning Architectures
  – Sequence Learning with one RNN Layer
  – Sequence Learning with multiple RNN Layers

Recurrent Neural Networks
▪ The human brain deals with information streams. Most data is obtained, processed, and generated sequentially.
  – E.g., listening: sound waves → vocabulary/sentences
  – E.g., action: brain signals/instructions → sequential muscle movements
▪ Human thoughts have persistence; humans don't start their thinking from scratch every second.
  – As you read this sentence, you understand each word based on your prior knowledge.
▪ The applications of standard Artificial Neural Networks (and also Convolutional Networks) are limited because:
  – They only accept a fixed-size vector as input (e.g., an image) and produce a fixed-size vector as output (e.g., probabilities of different classes).
  – These models use a fixed number of computational steps (e.g., the number of layers in the model).
▪ Recurrent Neural Networks (RNNs) are a family of neural networks introduced to learn sequential data.
  – Inspired by the temporal dependence and persistence of human thought.

Recurrent Neural Networks
▪ Many problems are described by sequences:
  – Time series
  – Video/audio processing
  – Natural Language Processing (translation, dialogue)
  – Bioinformatics (DNA/protein)
  – Process control
▪ We want to model the data in a way that extracts the dependencies between sequence elements.

Real-life Sequence Learning Applications
▪ RNNs can be applied to various types of sequential data to learn temporal patterns.
  – Time-series data (e.g., stock price) → prediction, regression
  – Raw sensor data (e.g., signal, voice, handwriting) → labels or text sequences
  – Text → label (e.g., sentiment) or text sequence (e.g., translation, summary, answer)
  – Image and video → text description (e.g., captions, scene interpretation)

Task | Input | Output
Activity recognition (Zhu et al. 2018) | Sensor signals | Activity labels
Machine translation (Sutskever et al. 2014) | English text | French text
Question answering (Bordes et al. 2014) | Question | Answer
Speech recognition (Graves et al. 2013) | Voice | Text
Handwriting prediction (Graves 2013) | Handwriting | Text
Opinion mining (Irsoy et al. 2014) | Text | Opinion expression

Recurrent Neural Networks
▪ We need methods that explicitly model time/order dependencies and are capable of:
  – Processing examples of arbitrary length
  – Providing different mappings (one to many, many to one, many to many)

RNN
▪ Recurrent Neural Networks are networks with loops, allowing information to persist.
▪ The output at (some) time steps t is a predicted vector h_t, with output y_t = φ(h_t).
▪ In the diagram, a chunk of neural network, A = f_W, looks at some input x_t and outputs a value h_t. A loop allows information to be passed from one step of the network to the next.

RNN Rolled Out
▪ A recurrent neural network can be thought of as multiple copies of the same network, each passing a message to a successor. The diagram shows what happens if we unroll the loop; a minimal code sketch of this unrolled computation follows below.
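To make the recurrence concrete, here is a minimal NumPy sketch of the unrolled computation, assuming a tanh hidden update and a simple affine read-out standing in for φ; the names rnn_step, rnn_forward, W_xh, W_hh, and W_hy are illustrative choices, not taken from the slides.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One unrolled step: the same weights are reused at every time step."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

def rnn_forward(xs, h0, W_xh, W_hh, b_h, W_hy, b_y):
    """Process a variable-length sequence xs and emit y_t = phi(h_t) at each step."""
    h, ys = h0, []
    for x_t in xs:
        h = rnn_step(x_t, h, W_xh, W_hh, b_h)
        ys.append(W_hy @ h + b_y)  # phi taken here as a simple affine read-out
    return h, ys

# Example: input size 3, hidden size 4, output size 2, sequence length 5
rng = np.random.default_rng(0)
W_xh, W_hh, b_h = rng.normal(size=(4, 3)), rng.normal(size=(4, 4)), np.zeros(4)
W_hy, b_y = rng.normal(size=(2, 4)), np.zeros(2)
xs = [rng.normal(size=3) for _ in range(5)]
h_final, ys = rnn_forward(xs, np.zeros(4), W_xh, W_hh, b_h, W_hy, b_y)
```

Because the same weights are applied at every step, the sequence length can vary freely without changing the number of parameters, which is exactly the property the following slides highlight.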
RNN Backpropagation Through Time

Recurrent Neural Networks
▪ The recurrent structure of RNNs enables the following characteristics:
  – Specialized for processing a sequence of values x(1), …, x(τ)
    ▪ Each value x(t) is processed with the same network A, which preserves past information
  – Can scale to much longer sequences than would be practical for networks without a recurrent structure
    ▪ Reusing network A reduces the required number of parameters in the network
  – Can process variable-length sequences
    ▪ The network complexity does not vary when the input length changes
  – However, vanilla RNNs suffer from training difficulty due to exploding and vanishing gradients.

Exploding and Vanishing Gradients
▪ Exploding: if we start almost exactly on a boundary (cliff), tiny changes in where we start can make a huge difference.
▪ Vanishing: if we start a trajectory within an attractor (plane, flat surface), small changes in where we start make no difference to where we end up.
▪ Both cases hinder the learning process.

Exploding and Vanishing Gradients
▪ Consider the error at time step 4, E_4 = J(y_4, ŷ_4). In vanilla RNNs, computing its gradient involves many repeated factors of W_hh (and repeated tanh)*.
▪ If we decompose the singular values of the gradient multiplication matrix:
  – Largest singular value > 1 → exploding gradients
    ▪ A slight error in the late time steps causes drastic updates in the early time steps → unstable learning
  – Largest singular value < 1 → vanishing gradients
    ▪ Gradients passed to the early time steps are close to 0 → uninformed corrections
* Refer to Bengio et al. (1994) or Goodfellow et al. (2016) for a complete derivation.

RNN Truncated Backpropagation Through Time

Networks with Memory
▪ The vanilla RNN operates in a "multiplicative" way (repeated tanh).
▪ Two recurrent cell designs were proposed and widely adopted:
  – Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997)
  – Gated Recurrent Unit (GRU) (Cho et al. 2014)
▪ Both designs process information in an "additive" way, with gates to control the information flow.
  – A sigmoid gate outputs numbers between 0 and 1, describing how much of each component should be let through. E.g.,
    f_t = σ(W_f · [h_{t−1}, x_t] + b_f) = Sigmoid(W_f x_t + U_f h_{t−1} + b_f)

Long Short-Term Memory (LSTM)
▪ The key to LSTMs is the cell state C_t.
  – Stores information of the past → long-term memory
  – Passes along time steps with minor linear interactions → "additive"
  – Results in an uninterrupted gradient flow → errors in the past persist and impact learning in the future
▪ The LSTM cell manipulates input information with three gates:
  – Input gate: controls the intake of new information
  – Forget gate: determines what part of the cell state to update
  – Output gate: determines what part of the cell state to output

Step-by-step LSTM Walk Through
Forget gate: f_t = σ(W_f · [h_{t−1}, x_t] + b_f)
▪ Step 1: Decide what information to throw away from the cell state (memory): f_t ∗ C_{t−1}
  – The output of the previous state h_{t−1} and the new information x_t jointly determine what to forget
  – h_{t−1} contains selected features from the memory C_{t−1}
  – The forget gate f_t ranges between [0, 1]
  – Text-processing example: the cell state may include the gender of the current subject (h_{t−1}). When the model observes a new subject (x_t), it may want to forget (f_t → 0) the old subject in the memory (C_{t−1}).
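Here is a minimal NumPy sketch of the Step 1 forget gate, assuming h_{t−1} and x_t are concatenated as in the formula above; the names sigmoid, forget_gate, h_prev, and C_prev are illustrative, not from the lecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forget_gate(h_prev, x_t, W_f, b_f):
    """f_t = sigmoid(W_f . [h_{t-1}, x_t] + b_f); every entry lies in [0, 1]."""
    concat = np.concatenate([h_prev, x_t])  # [h_{t-1}, x_t]
    return sigmoid(W_f @ concat + b_f)

# The gate then scales the old cell state elementwise: entries of f_t near 0
# erase the corresponding memory, entries near 1 keep it.
# kept_memory = forget_gate(h_prev, x_t, W_f, b_f) * C_prev
```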
Step-by-step LSTM Walk Through
Input gate: i_t = σ(W_i · [h_{t−1}, x_t] + b_i)
Alternative cell state: C̃_t = tanh(W_C · [h_{t−1}, x_t] + b_C)
▪ Step 2: Prepare the updates for the cell state from the input: i_t ∗ C̃_t
  – An alternative cell state C̃_t is created from the new information x_t with the guidance of h_{t−1}.
  – The input gate i_t ranges between [0, 1]
  – Example: the model may want to add (i_t → 1) the gender of the new subject (C̃_t) to the cell state, to replace the old one it is forgetting.

Step-by-step LSTM Walk Through
New cell state: C_t = f_t ∗ C_{t−1} + i_t ∗ C̃_t
▪ Step 3: Update the cell state: f_t ∗ C_{t−1} + i_t ∗ C̃_t
  – The new cell state C_t comprises information from the past (f_t ∗ C_{t−1}) and valuable new information (i_t ∗ C̃_t)
  – ∗ denotes elementwise multiplication
  – Example: the model drops the old gender information (f_t ∗ C_{t−1}) and adds the new gender information (i_t ∗ C̃_t) to form the new cell state (C_t).

Step-by-step LSTM Walk Through
Output gate: o_t = σ(W_o · [h_{t−1}, x_t] + b_o)
Hidden state: h_t = o_t ∗ tanh(C_t)
▪ Step 4: Decide the filtered output from the new cell state: o_t ∗ tanh(C_t)
  – The tanh function filters the new cell state to characterize the stored information
    ▪ Significant information in C_t → ±1
    ▪ Minor details → 0
  – The output gate o_t ranges between [0, 1]
  – h_t serves as a control signal for the next time step
  – Example: since the model just saw a new subject (x_t), it might want to output (o_t → 1) information relevant to a verb (tanh(C_t)), e.g., singular/plural, in case a verb comes next.

LSTM: Components & Flow
▪ LSTM unit output: h_t = o_t ∗ tanh(C_t)
▪ Output gate units: o_t = σ(W_o · [h_{t−1}, x_t] + b_o)
▪ Transformed memory cell contents: tanh(C_t)
▪ Gated update to memory cell units: C_t = f_t ∗ C_{t−1} + i_t ∗ C̃_t
▪ Forget gate units: f_t = σ(W_f · [h_{t−1}, x_t] + b_f)
▪ Input gate units: i_t = σ(W_i · [h_{t−1}, x_t] + b_i)
▪ Potential input to memory cell: C̃_t = tanh(W_C · [h_{t−1}, x_t] + b_C)

Gated Recurrent Unit (GRU)
▪ GRU is a variation of LSTM that also adopts the gated design.
▪ Differences:
  – GRU uses an update gate z_t to substitute for the input and forget gates i_t and f_t
  – GRU combines the cell state C_t and hidden state h_t of the LSTM into a single state h_t
▪ GRU obtains performance similar to LSTM with fewer parameters and faster convergence. (Cho et al. 2014)
▪ Update gate: controls the composition of the new state
    z_t = σ(W_z · [h_{t−1}, x_t])
▪ Reset gate: determines how much old information is needed in the alternative state h̃_t
    r_t = σ(W_r · [h_{t−1}, x_t])
▪ Alternative state: contains the new information
    h̃_t = tanh(W · [r_t ∗ h_{t−1}, x_t])
▪ New state: replaces selected old information with new information
    h_t = (1 − z_t) ∗ h_{t−1} + z_t ∗ h̃_t
▪ A minimal code sketch of one LSTM step and one GRU step is given below.

Sequence Learning Architectures
▪ Learning with RNNs is more robust when the vanishing/exploding gradient problem is resolved.
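Before moving on to sequence learning architectures, here is a minimal NumPy sketch summarizing the two gated cells above: one LSTM step following the four-step walk-through, and one GRU step following the update/reset-gate equations. The function names and the choice to apply the gate weights to the concatenation [h_{t−1}, x_t] (with GRU biases omitted, matching the slide equations) are illustrative assumptions, not the lecture's own code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, W_f, b_f, W_i, b_i, W_C, b_C, W_o, b_o):
    """One LSTM step following the four-step walk-through above."""
    z = np.concatenate([h_prev, x_t])      # [h_{t-1}, x_t]
    f_t = sigmoid(W_f @ z + b_f)           # Step 1: forget gate
    i_t = sigmoid(W_i @ z + b_i)           # Step 2: input gate
    C_tilde = np.tanh(W_C @ z + b_C)       #         alternative cell state
    C_t = f_t * C_prev + i_t * C_tilde     # Step 3: additive cell-state update
    o_t = sigmoid(W_o @ z + b_o)           # Step 4: output gate
    h_t = o_t * np.tanh(C_t)
    return h_t, C_t

def gru_step(x_t, h_prev, W_z, W_r, W_h):
    """One GRU step: update gate z_t, reset gate r_t, single state h_t."""
    z = np.concatenate([h_prev, x_t])
    z_t = sigmoid(W_z @ z)                                        # update gate
    r_t = sigmoid(W_r @ z)                                        # reset gate
    h_tilde = np.tanh(W_h @ np.concatenate([r_t * h_prev, x_t]))  # alternative state
    return (1.0 - z_t) * h_prev + z_t * h_tilde                   # new state
```

Note how both updates are additive blends of old and new information, which is what keeps the gradient path through the cell state (or the GRU hidden state) uninterrupted.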
