Lecture 10: Recurrent Neural Networks
Fei-Fei Li, Ranjay Krishna, Danfei Xu, May 7, 2020

Administrative: Midterm
- Take-home midterm next Tue 5/12: 1 hour 20 minutes (plus a 20-minute buffer), to be completed within a 24-hour window.
- Will be released on Gradescope; see Piazza for detailed information.
- Midterm review session: Fri 5/8 discussion section.
- The midterm covers material up to and including this lecture (Lecture 10).

Administrative
- Project proposal feedback has been released.
- Project milestone due Mon 5/18; see Piazza for requirements. You need some baseline / initial results by then, so start implementing soon if you haven't yet!
- A3 will be released Wed 5/13 and is due Wed 5/27.

Last Time: CNN Architectures
- AlexNet, GoogLeNet, ResNet, SENet.

Comparing complexity...
[Figure: accuracy vs. compute for these architectures, from "An Analysis of Deep Neural Network Models for Practical Applications", 2017. Figures copyright Alfredo Canziani, Adam Paszke, Eugenio Culurciello, 2017; reproduced with permission.]

Efficient networks...
MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications [Howard et al. 2017]
- Depthwise separable convolutions replace standard convolutions by factorizing them into a depthwise convolution followed by a 1x1 (pointwise) convolution.
- Standard network block: Conv (3x3, C->C); total compute 9C^2HW.
- MobileNets block: Conv (3x3, C->C, groups=C) depthwise convolution, then Conv (1x1, C->C) pointwise convolution; total compute 9CHW + C^2HW. (A minimal code sketch of this comparison appears at the end of the section.)
- Much more efficient, with little loss in accuracy.
- Follow-up work: MobileNetV2 (Sandler et al., 2018); ShuffleNet (Zhang et al., CVPR 2018).

Meta-learning: learning to learn network architectures...
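To make the depthwise-separable compute comparison concrete, here is a minimal sketch (assuming PyTorch; the channel count and feature-map size are hypothetical and not from the lecture) that builds a standard 3x3 convolution and its depthwise-separable factorization, then evaluates the multiply-accumulate formulas from the slide.

import torch
import torch.nn as nn

# Hypothetical sizes for illustration only.
C, H, W = 64, 56, 56
x = torch.randn(1, C, H, W)

# Standard convolution: 3x3, C -> C channels.
standard = nn.Conv2d(C, C, kernel_size=3, padding=1, bias=False)

# Depthwise separable: a 3x3 depthwise conv (groups=C) followed by a
# 1x1 pointwise conv, as in MobileNets.
separable = nn.Sequential(
    nn.Conv2d(C, C, kernel_size=3, padding=1, groups=C, bias=False),  # depthwise
    nn.Conv2d(C, C, kernel_size=1, bias=False),                       # pointwise
)

# Both paths produce the same output shape...
assert standard(x).shape == separable(x).shape

# ...but the compute differs as on the slide:
#   standard:            9 * C^2 * H * W
#   depthwise separable: 9 * C * H * W  +  C^2 * H * W
macs_standard = 9 * C * C * H * W
macs_separable = 9 * C * H * W + C * C * H * W
print(f"standard:  {macs_standard:,} MACs")
print(f"separable: {macs_separable:,} MACs ({macs_standard / macs_separable:.1f}x fewer)")

With C = 64 this works out to roughly an 8x reduction in compute for the same output shape, which is why MobileNets can trade a small loss in accuracy for a large gain in efficiency.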