Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM)

Yuan YAO, HKUST

Summary

We have shown:
- First-order optimization methods: GD (BP), SGD, Nesterov, Adagrad, Adam, RMSProp, etc.
- Second-order optimization methods: L-BFGS.
- Regularization methods: penalties (L2/L1/elastic net), dropout, batch normalization, data augmentation, etc.
- CNN architectures: LeNet-5, AlexNet, VGG, GoogLeNet, ResNet.

Now:
- Recurrent Neural Networks
- LSTM

Reference: Fei-Fei Li, Justin Johnson, and Serena Yeung, Stanford cs231n, Lecture 10 (May 4, 2017).

Last Time: CNN Architectures

AlexNet and VGG have tons of parameters in their fully connected layers. AlexNet has ~62M parameters in total:
- FC6: 256x6x6 -> 4096: 38M params
- FC7: 4096 -> 4096: 17M params
- FC8: 4096 -> 1000: 4M params
That is ~59M params in the FC layers alone. ResNet, by contrast, allows deep networks with a small number of params.

Recurrent Neural Networks: Process Sequences

A "vanilla" neural network maps one fixed-size input to one fixed-size output (one to one). RNNs extend this to sequences:
- One to many, e.g. image captioning: image -> sequence of words.
- Many to one, e.g. sentiment classification: sequence of words -> sentiment.
- Many to many, e.g. machine translation: sequence of words -> sequence of words.
- Many to many (synced), e.g. video classification on the frame level.

Sequential Processing of Non-Sequence Data

RNNs can also process non-sequence data sequentially, e.g. classifying images by taking a series of "glimpses" (Ba, Mnih, and Kavukcuoglu, "Multiple Object Recognition with Visual Attention", ICLR 2015; Gregor et al., "DRAW: A Recurrent Neural Network For Image Generation", ICML 2015).

Recurrent Neural Network

An RNN maintains an internal state that is updated as a sequence x is read in, and we usually want to predict an output vector y at some (or every) time step. We can process a sequence of vectors x by applying a recurrence formula at every time step:

    h_t = f_W(h_{t-1}, x_t)

where h_t is the new state, h_{t-1} the old state, x_t the input vector at time step t, and f_W some function with parameters W. Notice: the same function and the same set of parameters are used at every time step.

(Vanilla) Recurrent Neural Network

These are the state-space equations of a feedback dynamical system. In the vanilla RNN the state consists of a single "hidden" vector h:

    h_t = tanh(Whh * h_{t-1} + Wxh * x_t)
    y_t = Why * h_t
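Below is a minimal sketch of one vanilla RNN step in NumPy. The weight names Wxh, Whh, Why mirror the formulas above; the concrete sizes, initialization, and toy input sequence are illustrative assumptions, not code from the lecture.

    import numpy as np

    H, D, V = 64, 16, 16                  # hidden, input, output sizes (assumed)
    rng = np.random.default_rng(0)
    Wxh = rng.normal(0, 0.01, (H, D))     # input-to-hidden
    Whh = rng.normal(0, 0.01, (H, H))     # hidden-to-hidden, shared across time
    Why = rng.normal(0, 0.01, (V, H))     # hidden-to-output

    def rnn_step(h_prev, x):
        """One recurrence step: h_t = tanh(Whh h_{t-1} + Wxh x_t), y_t = Why h_t."""
        h = np.tanh(Whh @ h_prev + Wxh @ x)
        y = Why @ h
        return h, y

    # Unroll over a toy sequence: the same weights are reused at every step.
    h = np.zeros(H)
    for x in [rng.normal(size=D) for _ in range(5)]:
        h, y = rnn_step(h, x)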
Or, to obtain a distribution over classes, y_t = softmax(Why * h_t).

RNN: Computational Graph

Unrolling the recurrence in time gives a computational graph: starting from an initial state h_0, each step applies f_W to the previous state and the next input, h_1 = f_W(h_0, x_1), h_2 = f_W(h_1, x_2), and so on up to h_T. As in time-invariant systems, we re-use the same weight matrix W at every time step.

RNN: Computational Graph: Many to Many

Outputs y_1, y_2, ..., y_T are read off every hidden state. Attaching a loss module L_t to each output and summing gives the total loss L = L_1 + L_2 + ... + L_T.

RNN: Computational Graph: Many to One

A single output y is read off the final hidden state h_T.

RNN: Computational Graph: One to Many

A single input x initializes the state, and an output is emitted at every subsequent step.

Sequence to Sequence: Many-to-one + One-to-many

Many to one: an encoder RNN summarizes the input sequence in a single vector (its final hidden state). One to many: a decoder RNN produces the output sequence from that single input vector.

Example: Character-level Language Model

Vocabulary: [h, e, l, o]. Example training sequence: "hello". Each character is fed in as a one-hot vector, and at every step the model is trained to assign a high score to the next character of the sequence.

Example: Character-level Language Model, Sampling

At test time, sample characters one at a time and feed them back into the model. The softmax of the output scores gives a distribution over the vocabulary at each step:

    sampled:  "e"   "l"   "l"   "o"
    h         .03   .25   .11   .11
    e         .13   .20   .17   .02
    l         .00   .05   .68   .08
    o         .84   .50   .03   .79

Starting from "h", we draw "e" (a sample, not the argmax, which here would be "o"), feed it back in, draw "l", and so on until the model has spelled out "hello".
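A minimal sketch of this sampling loop in NumPy, mirroring the step above; the hidden size, weights, and seed character are illustrative assumptions, and with untrained weights the output will of course not be "ello".

    import numpy as np

    vocab = ['h', 'e', 'l', 'o']
    V, H = len(vocab), 32                 # vocabulary and hidden sizes (assumed)
    rng = np.random.default_rng(0)
    Wxh = rng.normal(0, 0.01, (H, V))     # one-hot character -> hidden
    Whh = rng.normal(0, 0.01, (H, H))     # hidden -> hidden
    Why = rng.normal(0, 0.01, (V, H))     # hidden -> character scores

    def softmax(z):
        e = np.exp(z - z.max())           # shift for numerical stability
        return e / e.sum()

    def sample(seed, n):
        """Sample n characters, feeding each draw back in as the next input."""
        x = np.zeros(V); x[vocab.index(seed)] = 1.0
        h = np.zeros(H)
        out = []
        for _ in range(n):
            h = np.tanh(Whh @ h + Wxh @ x)
            p = softmax(Why @ h)          # distribution over the vocabulary
            ix = rng.choice(V, p=p)       # a draw, not an argmax
            out.append(vocab[ix])
            x = np.zeros(V); x[ix] = 1.0  # feed the sample back in
        return ''.join(out)

    print(sample('h', 4))                 # "ello" only once the model is trained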
Backpropagation Through Time

Forward through the entire sequence to compute the loss, then backward through the entire sequence to compute the gradient. For long sequences this is slow and memory-hungry.

Truncated Backpropagation Through Time

Instead, run forward and backward through chunks of the sequence rather than the whole sequence: carry hidden states forward in time forever, but only backpropagate for some smaller number of steps.
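A minimal sketch of truncated backpropagation through time for the vanilla RNN above, in the spirit of the min-char-rnn gist linked below; biases and gradient clipping are omitted, and the sizes, learning rate, and random index stream are toy assumptions.

    import numpy as np

    V, H, CHUNK, LR = 4, 32, 8, 0.1       # sizes, chunk length, step (assumed)
    rng = np.random.default_rng(0)
    Wxh = rng.normal(0, 0.01, (H, V))
    Whh = rng.normal(0, 0.01, (H, H))
    Why = rng.normal(0, 0.01, (V, H))

    def chunk_loss_and_grads(inputs, targets, h):
        """Forward one chunk, then backprop through that chunk only."""
        xs, hs, ps, loss = {}, {-1: h}, {}, 0.0
        for t, ix in enumerate(inputs):                       # forward
            xs[t] = np.zeros(V); xs[t][ix] = 1.0
            hs[t] = np.tanh(Whh @ hs[t - 1] + Wxh @ xs[t])
            z = Why @ hs[t]
            e = np.exp(z - z.max()); ps[t] = e / e.sum()      # softmax
            loss -= np.log(ps[t][targets[t]])                 # cross-entropy
        dWxh, dWhh, dWhy = (np.zeros_like(W) for W in (Wxh, Whh, Why))
        dh_next = np.zeros(H)
        for t in reversed(range(len(inputs))):                # backward
            dz = ps[t].copy(); dz[targets[t]] -= 1.0
            dWhy += np.outer(dz, hs[t])
            dh = Why.T @ dz + dh_next
            draw = (1.0 - hs[t] ** 2) * dh                    # through tanh
            dWxh += np.outer(draw, xs[t])
            dWhh += np.outer(draw, hs[t - 1])
            dh_next = Whh.T @ draw
        return loss, dWxh, dWhh, dWhy, hs[len(inputs) - 1]

    data = rng.integers(0, V, 400)                            # toy index stream
    h = np.zeros(H)                                           # carried forever
    for s in range(0, len(data) - CHUNK - 1, CHUNK):
        inp, tgt = data[s:s + CHUNK], data[s + 1:s + CHUNK + 1]
        loss, dWxh, dWhh, dWhy, h = chunk_loss_and_grads(inp, tgt, h)
        for W, dW in ((Wxh, dWxh), (Whh, dWhh), (Why, dWhy)):
            W -= LR * dW                                      # SGD update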
Example: Text -> RNN

Train a character-level RNN on raw text and sample from it (code: https://gist.github.com/karpathy/d4dee566867f8291f086). At first the samples are gibberish; train more, and they become increasingly text-like.

Image Captioning

A Convolutional Neural Network encodes the test image into a feature vector v, and a Recurrent Neural Network decodes v into a sequence of words. The image enters the decoder through an extra term in the recurrence:

    before: h = tanh(Wxh * x + Whh * h)
    now:    h = tanh(Wxh * x + Whh * h + Wih * v)

Decoding starts from a special <START> token; at each step a word is sampled (e.g. "straw", then "hat") and fed back in as the next input, and generation finishes when the <END> token is sampled.
(Mao et al., "Explain Images with Multimodal Recurrent Neural Networks"; Karpathy and Fei-Fei, "Deep Visual-Semantic Alignments for Generating Image Descriptions", CVPR 2015; Vinyals et al., "Show and Tell: A Neural Image Caption Generator"; Donahue et al., "Long-term Recurrent Convolutional Networks for Visual Recognition and Description"; Chen and Zitnick, "Learning a Recurrent Visual Representation for Image Caption Generation".)

Image Captioning: Example Results

Captions generated using neuraltalk2 (all images CC0 public domain): "A cat sitting on a suitcase on the floor"; "A cat is sitting on a tree branch"; "A dog is running in the grass with a frisbee"; "A white teddy bear sitting in the grass"; "Two people walking on the beach with surfboards"; "A tennis player in action on the court"; "Two giraffes standing in a grassy field"; "A man riding a dirt bike on a dirt track".

Image Captioning: Failure Cases

The model also fails, e.g. on images of a fur coat, a handstand, and a spider.
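A minimal sketch of the image-conditioned recurrence with greedy decoding; the feature size, the tiny vocabulary, and one-hot word inputs are illustrative assumptions (the slides sample from the softmax rather than take the argmax).

    import numpy as np

    words = ['<START>', 'straw', 'hat', '<END>']
    V, H, F = len(words), 32, 512         # vocab, hidden, CNN feature sizes (assumed)
    rng = np.random.default_rng(0)
    Wxh = rng.normal(0, 0.01, (H, V))     # word -> hidden
    Whh = rng.normal(0, 0.01, (H, H))     # hidden -> hidden
    Wih = rng.normal(0, 0.01, (H, F))     # image feature -> hidden (the new term)
    Why = rng.normal(0, 0.01, (V, H))     # hidden -> word scores

    def caption(v, max_len=20):
        """Decode a caption conditioned on the CNN feature vector v."""
        x = np.zeros(V); x[words.index('<START>')] = 1.0
        h = np.zeros(H)
        out = []
        for _ in range(max_len):
            # now: h = tanh(Wxh * x + Whh * h + Wih * v)
            h = np.tanh(Wxh @ x + Whh @ h + Wih @ v)
            ix = int(np.argmax(Why @ h))  # greedy choice for brevity
            if words[ix] == '<END>':      # reaching <END> finishes the caption
                break
            out.append(words[ix])
            x = np.zeros(V); x[ix] = 1.0  # feed the word back in
        return ' '.join(out)

    v = rng.normal(size=F)                # stand-in for CNN(test image)
    print(caption(v))                     # meaningful only with trained weights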