Introduction to Theano a Fast Python Library for Modelling and Training


Introduction to Theano: A Fast Python Library for Modelling and Training
Pascal Lamblin
Institut des algorithmes d'apprentissage de Montréal / Montreal Institute for Learning Algorithms, Université de Montréal
August 4th, Deep Learning Summer School 2015, Montréal

Overview
• Programming Tutorials: Motivation; Basic Usage
• Graph definition and Syntax: Graph structure; Strong typing; Differences from Python/NumPy
• Graph Transformations: Substitution and Cloning; Gradient; Shared variables
• Make it fast!: Optimizations; Code Generation; GPU
• Advanced Topics: Looping: the scan operation; Extending Theano; Recent Features; Features Coming Soon

Objectives
These tutorials should be the occasion for you to:
• Learn about software tools to implement deep learning algorithms
• Get some hands-on experience with these tools
• Play with simple implementations of existing algorithms
• Ask questions and get help from developers and researchers
http://github.com/mila-udem/summerschool2015/

Tutorials Schedule
• Tuesday, August 4th (day 2): Introduction to Theano, Theano examples (Pascal Lamblin)
• Wednesday, August 5th (day 3): GPU programming (NVIDIA)
• Friday, August 7th (day 5): Fuel: a library for machine learning datasets (Vincent Dumoulin); Deep neural networks in Theano (Pascal Lamblin)
• Monday, August 10th (day 8): Debugging with Theano (Frédéric Bastien); Convolutional networks (Arnaud Bergeron)
• Tuesday, August 11th (day 9): Scan: Loops in Theano (Pierre Luc Carrier); Recurrent neural networks (Philémon Brakel)
• Wednesday, August 12th (day 10): Overflow session

Theano vision
Mathematical symbolic expression compiler:
• Easy to define expressions (expressions mimic NumPy's syntax and semantics)
• Possible to manipulate those expressions: substitutions; gradient, R operator; stability optimizations
• Fast to compute values for those expressions: speed optimizations; use of fast back-ends (CUDA, BLAS, custom C code)
• Tools to inspect and check for correctness

Current status
• Mature: Theano has been developed and used since January 2008 (7 years old)
• Has driven hundreds of research papers
• Good user documentation
• Active mailing list with participants worldwide
• Core technology for Silicon Valley start-ups
• Many contributors from different places
• Used to teach university classes
• Has been used for research at large companies
Theano: deeplearning.net/software/theano/
Deep Learning Tutorials: deeplearning.net/tutorial/
Related projects
Many libraries are built on top of Theano (mostly for machine learning):
• Blocks
• Keras
• Lasagne
• Morb
• Pylearn2
• PyMC 3
• sklearn-theano
• theano-rnn
• ...

Basic usage
Theano defines a language, a compiler, and a library:
• Define a symbolic expression
• Compile a function that can compute values
• Execute that function on numeric values

Defining an expression
• Symbolic, strongly-typed inputs:

import theano
from theano import tensor as T
x = T.vector('x')
W = T.matrix('W')
b = T.vector('b')

• NumPy-like syntax to build expressions:

dot = T.dot(x, W)
out = T.nnet.sigmoid(dot + b)

Graph visualization (1)

debugprint(dot)

dot [@A] ''
 |x [@B]
 |W [@C]

debugprint(out)

sigmoid [@A] ''
 |Elemwise{add,no_inplace} [@B] ''
 | |dot [@C] ''
 | | |x [@D]
 | | |W [@E]
 | |b [@F]

Compiling a Theano function
Build a callable that computes outputs given inputs:

f = theano.function(inputs=[x, W], outputs=dot)
g = theano.function([x, W, b], out)
h = theano.function([x, W, b], [dot, out])
i = theano.function([x, W, b], [dot + b, out])
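The cycle ends by calling a compiled function on concrete NumPy arrays. As a sketch of the values `f` and `g` would return, plain NumPy stands in for the compiled graph below; the array shapes and values are illustrative assumptions, not from the slides:

```python
import numpy as np

def sigmoid(z):
    # Numerically what T.nnet.sigmoid computes, elementwise
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative inputs: x is a length-3 vector, W a 3x2 matrix, b a length-2 vector
x = np.array([1.0, -1.0, 0.5])
W = np.array([[0.1, 0.2],
              [0.3, 0.4],
              [0.5, 0.6]])
b = np.array([0.0, 1.0])

dot = x.dot(W)           # what f(x, W) would compute
out = sigmoid(dot + b)   # what g(x, W, b) would compute
print(dot.shape, out.shape)  # -> (2,) (2,)
```

Calling the compiled Theano functions on these same arrays, e.g. `f(x, W)`, would return the corresponding values as NumPy arrays.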
Graph visualization (2)

theano.printing.debugprint(f)

CGemv{inplace} [@A] ''   3
 |AllocEmpty{dtype='float64'} [@B] ''   2
 | |Shape_i{1} [@C] ''   1
 | | |W [@D]
 |TensorConstant{1.0} [@E]
 |InplaceDimShuffle{1,0} [@F] 'W.T'   0
 | |W [@D]
 |x [@G]
 |TensorConstant{0.0} [@H]

theano.printing.debugprint(g)

Elemwise{ScalarSigmoid}[(0, 0)] [@A] ''   2
 |CGemv{no_inplace} [@B] ''   1
 | |b [@C]
 | |TensorConstant{1.0} [@D]
 | |InplaceDimShuffle{1,0} [@E] 'W.T'   0
 | | |W [@F]
 | |x [@G]
 | |TensorConstant{1.0} [@D]

The graphs can also be rendered as images with theano.printing.pydotprint(f) and theano.printing.pydotprint(g).

[The pydotprint(f) figure shows the compiled graph for f: nodes for W, its transpose W.T (InplaceDimShuffle{1,0}), Shape_i{1}, AllocEmpty, x, and the constants 1.0 and 0.0, all feeding a single CGemv node.]
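The effect of the optimizer can be checked numerically: the compiled graph for `g` replaces add(dot(x, W), b) with a single BLAS GEMV call, y <- alpha*(A @ x) + beta*y, seeded with b and with alpha = beta = 1. A NumPy sketch (shapes and values are illustrative assumptions) confirming the two formulations agree:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(3)
W = rng.standard_normal((3, 2))
b = rng.standard_normal(2)

# Unoptimized graph: sigmoid(dot(x, W) + b)
naive = 1.0 / (1.0 + np.exp(-(x.dot(W) + b)))

# Optimized graph: one GEMV, y <- 1.0 * (W.T @ x) + 1.0 * y, y seeded with b
y = b.copy()
y = 1.0 * (W.T @ x) + 1.0 * y
gemv = 1.0 / (1.0 + np.exp(-y))

assert np.allclose(naive, gemv)
```

Seeding the GEMV output with b is why no separate add node survives in the optimized graph.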
Recommended publications
  • Theano: a Python Framework for Fast Computation of Mathematical Expressions (The Theano Development Team)∗
    Theano: A Python framework for fast computation of mathematical expressions (The Theano Development Team)∗ Rami Al-Rfou,6 Guillaume Alain,1 Amjad Almahairi,1 Christof Angermueller,7, 8 Dzmitry Bahdanau,1 Nicolas Ballas,1 Frédéric Bastien,1 Justin Bayer, Anatoly Belikov,9 Alexander Belopolsky,10 Yoshua Bengio,1, 3 Arnaud Bergeron,1 James Bergstra,1 Valentin Bisson,1 Josh Bleecher Snyder, Nicolas Bouchard,1 Nicolas Boulanger-Lewandowski,1 Xavier Bouthillier,1 Alexandre de Brébisson,1 Olivier Breuleux,1 Pierre-Luc Carrier,1 Kyunghyun Cho,1, 11 Jan Chorowski,1, 12 Paul Christiano,13 Tim Cooijmans,1, 14 Marc-Alexandre Côté,15 Myriam Côté,1 Aaron Courville,1, 4 Yann N. Dauphin,1, 16 Olivier Delalleau,1 Julien Demouth,17 Guillaume Desjardins,1, 18 Sander Dieleman,19 Laurent Dinh,1 Mélanie Ducoffe,1, 20 Vincent Dumoulin,1 Samira Ebrahimi Kahou,1, 2 Dumitru Erhan,1, 21 Ziye Fan,22 Orhan Firat,1, 23 Mathieu Germain,1 Xavier Glorot,1, 18 Ian Goodfellow,1, 24 Matt Graham,25 Caglar Gulcehre,1 Philippe Hamel,1 Iban Harlouchet,1 Jean-Philippe Heng,1, 26 Balázs Hidasi,27 Sina Honari,1 Arjun Jain,28 Sébastien Jean,1, 11 Kai Jia,29 Mikhail Korobov,30 Vivek Kulkarni,6 Alex Lamb,1 Pascal Lamblin,1 Eric Larsen,1, 31 César Laurent,1 Sean Lee,17 Simon Lefrançois,1 Simon Lemieux,1 Nicholas Léonard,1 Zhouhan Lin,1 Jesse A. Livezey,32 Cory Lorenz,33 Jeremiah Lowin, Qianli Ma,34 Pierre-Antoine Manzagol,1 Olivier Mastropietro,1 Robert T. McGibbon,35 Roland Memisevic,1, 4 Bart van Merriënboer,1 Vincent Michalski,1 Mehdi Mirza,1 Alberto Orlandi, Christopher Pal,1, 2 Razvan Pascanu,1, 18 Mohammad Pezeshki,1 Colin Raffel,36 Daniel Renshaw,25 Matthew Rocklin, Adriana Romero,1 Markus Roth, Peter Sadowski,37 John Salvatier,38 François Savard,1 Jan Schlüter,39 John Schulman,24 Gabriel Schwartz,40 Iulian Vlad Serban,1 Dmitriy Serdyuk,1 Samira Shabanian,1 Étienne Simon,1, 41 Sigurd Spieckermann, S.
  • Practical Deep Learning
    Practical deep learning
    December 12-13, 2019
    CSC – IT Center for Science Ltd., Espoo
    Markus Koskela, Mats Sjöberg
    All original material (C) 2019 by CSC – IT Center for Science Ltd. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 Unported License, http://creativecommons.org/licenses/by-sa/4.0. All other material copyrighted by their respective authors.

    Course schedule

    Thursday
    9.00-10.30 Lecture 1: Introduction to deep learning
    10.30-10.45 Coffee break
    10.45-11.00 Exercise 1: Introduction to Notebooks, Keras
    11.00-11.30 Lecture 2: Multi-layer perceptron networks
    11.30-12.00 Exercise 2: Classification with MLPs
    12.00-13.00 Lunch break
    13.00-14.00 Lecture 3: Images and convolutional neural networks
    14.00-14.30 Exercise 3: Image classification with CNNs
    14.30-14.45 Coffee break
    14.45-15.30 Lecture 4: Text data, embeddings, 1D CNN, recurrent neural networks, attention
    15.30-16.00 Exercise 4: Text sentiment classification with CNNs and RNNs

    Friday
    9.00-9.45 Lecture 5: Deep learning frameworks
    9.45-10.15 Lecture 6: GPUs and batch jobs
    10.15-10.30 Coffee break
    10.30-12.00 Exercise 5: Image classification: dogs vs. cats; traffic signs
    12.00-13.00 Lunch break
    13.00-14.00 Exercise 6: Text categorization: 20 newsgroups
    14.00-14.45 Lecture 7: Cloud, GPU utilization, using multiple GPUs
    14.45-15.00 Coffee break
    15.00-16.00 Exercise 7: Using multiple GPUs

    Up-to-date agenda and lecture slides can be found at https://tinyurl.com/r3fd3st
    Exercise materials are at GitHub: https://github.com/csc-training/intro-to-dl/
    Wireless accounts for the CSC-guest network are behind the badges.
  • Comparative Study of Caffe, Neon, Theano, and Torch
    Workshop track - ICLR 2016
    COMPARATIVE STUDY OF CAFFE, NEON, THEANO, AND TORCH FOR DEEP LEARNING
    Soheil Bahrampour, Naveen Ramakrishnan, Lukas Schott, Mohak Shah
    Bosch Research and Technology Center
    {Soheil.Bahrampour, Naveen.Ramakrishnan, fixed-term.Lukas.Schott, [email protected]
    ABSTRACT
    Deep learning methods have resulted in significant performance improvements in several application domains and as such several software frameworks have been developed to facilitate their implementation. This paper presents a comparative study of four deep learning frameworks, namely Caffe, Neon, Theano, and Torch, on three aspects: extensibility, hardware utilization, and speed. The study is performed on several types of deep learning architectures and we evaluate the performance of the above frameworks when employed on a single machine for both (multi-threaded) CPU and GPU (Nvidia Titan X) settings. The speed performance metrics used here include the gradient computation time, which is important during the training phase of deep networks, and the forward time, which is important from the deployment perspective of trained networks. For convolutional networks, we also report how each of these frameworks support various convolutional algorithms and their corresponding performance. From our experiments, we observe that Theano and Torch are the most easily extensible frameworks. We observe that Torch is best suited for any deep architecture on CPU, followed by Theano. It also achieves the best performance on the GPU for large convolutional and fully connected networks, followed closely by Neon. Theano achieves the best performance on GPU for training and deployment of LSTM networks. Finally Caffe is the easiest for evaluating the performance of standard deep architectures.
  • Tensorflow, Theano, Keras, Torch, Caffe Vicky Kalogeiton, Stéphane Lathuilière, Pauline Luc, Thomas Lucas, Konstantin Shmelkov Introduction
    TensorFlow, Theano, Keras, Torch, Caffe Vicky Kalogeiton, Stéphane Lathuilière, Pauline Luc, Thomas Lucas, Konstantin Shmelkov Introduction TensorFlow Google Brain, 2015 (rewritten DistBelief) Theano University of Montréal, 2009 Keras François Chollet, 2015 (now at Google) Torch Facebook AI Research, Twitter, Google DeepMind Caffe Berkeley Vision and Learning Center (BVLC), 2013 Outline 1. Introduction of each framework a. TensorFlow b. Theano c. Keras d. Torch e. Caffe 2. Further comparison a. Code + models b. Community and documentation c. Performance d. Model deployment e. Extra features 3. Which framework to choose when ..? Introduction of each framework TensorFlow architecture 1) Low-level core (C++/CUDA) 2) Simple Python API to define the computational graph 3) High-level API (TF-Learn, TF-Slim, soon Keras…) TensorFlow computational graph - auto-differentiation! - easy multi-GPU/multi-node - native C++ multithreading - device-efficient implementation for most ops - whole pipeline in the graph: data loading, preprocessing, prefetching... TensorBoard TensorFlow development + bleeding edge (GitHub yay!) + division in core and contrib => very quick merging of new hotness + a lot of new related API: CRF, BayesFlow, SparseTensor, audio IO, CTC, seq2seq + so it can easily handle images, videos, audio, text... + if you really need a new native op, you can load a dynamic lib - sometimes contrib stuff disappears or moves - recently introduced bells and whistles are barely documented Presentation of Theano: - Maintained by Montréal University group. - Pioneered the use of a computational graph. - General machine learning tool -> Use of Lasagne and Keras. - Very popular in the research community, but not elsewhere. Falling behind. What is it like to start using Theano? - Read tutorials until you no longer can, then keep going.
  • Fashionable Modelling with Flux
    Fashionable Modelling with Flux
    Michael J Innes (Julia Computing, Inc., Edinburgh, UK), Elliot Saba (Julia Computing, Inc., Cambridge, MA, USA), Keno Fischer (Julia Computing, Inc., Cambridge, MA, USA), Dhairya Gandhi (Julia Computing, Inc., Bangalore, India), Marco Concetto Rudilosso (University College London, London, UK), Neethu Mariya Joy (Birla Institute of Technology and Science, Pilani, India), Tejan Karmali (National Institute of Technology, Goa, India), Avik Pal (Indian Institute of Technology, Kanpur, India), Viral B. Shah (Julia Computing, Inc., Cambridge, MA, USA)
    Abstract
    Machine learning as a discipline has seen an incredible surge of interest in recent years due in large part to a perfect storm of new theory, superior tooling, and renewed interest in its capabilities. We present in this paper a framework named Flux that shows how further refinement of the core ideas of machine learning, built upon the foundation of the Julia programming language, can yield an environment that is simple, easily modifiable, and performant. We detail the fundamental principles of Flux as a framework for differentiable programming, give examples of models that are implemented within Flux to display many of the language and framework-level features that contribute to its ease of use and high productivity, display internal compiler techniques used to enable the acceleration and performance that lies at the heart of Flux, and finally give an overview of the larger ecosystem that Flux fits inside of.
    arXiv:1811.01457v3 [cs.PL] 10 Nov 2018
  • The Big Picture
    The Bigger Picture
    John Urbanic, Parallel Computing Scientist, Pittsburgh Supercomputing Center
    Copyright 2021

    Building Blocks
    So far, we have used Fully Connected and Convolutional layers. These are ubiquitous, but there are many others:
    • Fully Connected (FC)
    • Convolutional (CNN)
    • Residual (ResNet) [Feed forward]
    • Recurrent (RNN) [Feedback, but has vanishing gradients so...]
    • Long Short Term Memory (LSTM)
    • Transformer (Attention based)
    • Bidirectional RNN
    • Restricted Boltzmann Machine
    Several of these are particularly common... (Image: Wikipedia Commons)

    Residual Neural Nets
    We've mentioned that disappearing gradients can be an issue, and we know that deeper networks are more powerful. How do we reconcile these two phenomena? One very successful method is to use some feedforward. (Figure courtesy: Chris Olah)
    • Helps preserve reasonable gradients for very deep networks
    • Very effective at imagery
    • Used by AlphaGo Zero (40 residual CNN layers) in place of the previous complex dual network
    • 100s of layers common, pushing 1000

    # Example: input 3-channel 256x256 image
    x = Input(shape=(256, 256, 3))
    y = Conv2D(3, (3, 3), padding='same')(x)  # 'same' padding keeps the shape so the add works
    z = keras.layers.add([x, y])

    Haven't all of our Keras networks been built as strict layers in a sequential method? Indeed, but Keras supports a functional API that provides the ability to define networks that branch in other ways. It is easy, and here (https://www.tensorflow.org/guide/keras/functional) is an MNIST example with three dense layers. More to our current point, here (https://www.kaggle.com/yadavsarthak/residual-networks-and-mnist) is a neat experiment that uses 15(!) residual layers to do MNIST. Not the most effective approach, but it works and illustrates the concept beautifully.
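The residual pattern in the Keras snippet above, output = input + layer(input), can be sketched shape-for-shape in plain NumPy. Here a hypothetical linear-plus-tanh map stands in for the conv layer, and all shapes and values are illustrative assumptions:

```python
import numpy as np

def layer(x, W):
    # Stand-in for Conv2D: any shape-preserving transformation works
    return np.tanh(x @ W)

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 8))   # batch of 4 feature vectors
W = rng.standard_normal((8, 8))   # square, so output shape matches input shape

y = layer(x, W)
z = x + y                         # the residual ("skip") connection
assert z.shape == x.shape
```

The shape constraint is the same one the Keras example satisfies: Conv2D produces 3 output channels on a 3-channel input precisely so that the add is well-defined.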
  • Theano Tutorial
    Theano Tutorial
    Theano is a software package which allows you to write symbolic code and compile it onto different architectures (in particular, CPU and GPU). It was developed by machine learning researchers at the University of Montreal. Its use is not limited to machine learning applications, but it was designed with machine learning in mind. It's especially good for machine learning techniques which are CPU-intensive and benefit from parallelization (e.g. large neural networks). This tutorial will cover the basic principles of Theano, including some common mental blocks which come up. It will also cover a simple multi-layer perceptron example. A more thorough Theano tutorial can be found here: http://deeplearning.net/software/theano/tutorial/ Any comments or suggestions should be directed to me or feel free to submit a pull request.

    In [1]:
    %matplotlib inline

    In [2]:
    # Ensure python 3 forward compatibility
    from __future__ import print_function
    import numpy as np
    import matplotlib.pyplot as plt
    import theano
    # By convention, the tensor submodule is loaded as T
    import theano.tensor as T

    Basics

    Symbolic variables
    In Theano, all algorithms are defined symbolically. It's more like writing out math than writing code. The following Theano variables are symbolic; they don't have an explicit value.

    In [3]:
    # The theano.tensor submodule has various primitive symbolic variable types.
    # Here, we're defining a scalar (0-d) variable.
    # The argument gives the variable its name.
    foo = T.scalar('foo')
    # Now, we can define another variable bar which is just foo squared.
    bar = foo**2
    # It will also be a theano variable.
  • Deep Learning Software Security and Fairness of Deep Learning SP18 Today
    Deep Learning Software
    Security and Fairness of Deep Learning SP18

    Today
    • HW1 is out, due Feb 15th
    • Anaconda and Jupyter Notebook
    • Deep Learning Software: Keras, Theano, Numpy

    Anaconda
    • A package management system for Python

    Jupyter notebook
    • A web application where you can code, interact, record and plot
    • Allows for remote interaction when you are working on the cloud
    • You will be using it for HW1

    Deep Learning Software
    Caffe (UCB), Caffe2 (Facebook), Paddle (Baidu), Torch (NYU/Facebook), PyTorch (Facebook), CNTK (Microsoft), Theano (U Montreal), TensorFlow (Google), MXNet (Amazon), Keras (High Level Wrapper)

    Mobile Platform
    • TensorFlow Lite: released last November

    Why do we use deep learning frameworks?
    • Easily build big computational graphs (not the case in HW1)
    • Easily compute gradients in computational graphs
    • GPU support (cuDNN, cuBLAS, etc.; not required in HW1)

    Keras
    • A high-level deep learning framework
    • Built on other deep-learning frameworks: Theano, Tensorflow, CNTK
    • Easy and Fun!

    Keras: A High-level Wrapper
    • Pass a list of layer instances to the constructor
    • Or: simply add layers. Make sure the dimensions match.

    Keras: Compile and train!
    • Epoch: 1 epoch means going through all the training dataset once

    Numpy
    • The fundamental package in Python for Scientific Computing and Data Science
    • Think in terms of vectors/matrices
      ○ Refrain from using for loops!
      ○ Similar to Matlab
    • Basic vector operations: sum, mean, argmax...
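The "think in vectors, refrain from for loops" advice from these slides can be made concrete with a quick sketch (the data here is illustrative):

```python
import numpy as np

v = np.array([3.0, 1.0, 4.0, 1.0, 5.0])

# Loop style (what the slides say to avoid)
total_loop = 0.0
for value in v:
    total_loop += value

# Vectorized style: one NumPy call per reduction
assert total_loop == v.sum() == 14.0
assert v.mean() == 2.8
assert v.argmax() == 4   # index of the largest element (5.0)
```

On arrays of realistic size, the vectorized calls dispatch to compiled loops and are typically orders of magnitude faster than the Python loop.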
  • A Simple Tutorial on Theano
    A Simple Tutorial on Theano
    Jiang Guo

    Outline
    • What's Theano?
    • How to use Theano?
      – Basic Usage: How to write a theano program
      – Advanced Usage: Manipulating symbolic expressions
    • Case study 1: Logistic Regression
    • Case study 2: Multi-layer Perceptron
    • Case study 3: Recurrent Neural Network

    WHAT'S THEANO?

    Theano is many things
    • Programming Language
    • Linear Algebra Compiler
    • Python library
      – Define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays.
    • Note: Theano is not a machine learning toolkit, but a mathematical toolkit that makes building downstream machine learning models (e.g. Pylearn2) easier.

    Theano features
    • Tight integration with NumPy
    • Transparent use of a GPU
    • Efficient symbolic differentiation
    • Speed and stability optimizations
    • Dynamic C code generation

    Project Status
    • Theano has been developed and used since 2008, by the LISA lab at the University of Montreal (led by Yoshua Bengio)
      – Citation: 202 (LTP: 88)
    • Deep Learning Tutorials
    • Machine learning library built upon Theano: Pylearn2
    • Good user documentation: http://deeplearning.net/software/theano/
    • Open-source on Github

    Basic Usage
    HOW TO USE THEANO?

    Python in 1 Slide
    • Interpreted language
    • OO and scripting language
    • Emphasizes code readability
    • Large and comprehensive standard library
    • Indentation for block delimiters
    • Dynamic type
    • Dictionary: d = {'key1': 'val1', 'key2': 42, ...}
    • List comprehension: [i+3 for i in range(10)]

    NumPy in 1 Slide
    • Basic scientific computing package in Python on the CPU
    • A powerful N-dimensional
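The two Python one-liners named in the "Python in 1 Slide" section run directly as shown:

```python
# Dictionary literal from the slide (the '...' entries are elided there)
d = {'key1': 'val1', 'key2': 42}

# List comprehension from the slide: add 3 to each of 0..9
shifted = [i + 3 for i in range(10)]

assert d['key2'] == 42
assert shifted == [3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
```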
  • Overview of Deep Learning Stack at NERSC
    Overview of Deep Learning Stack at NERSC
    Wahid Bhimji, Mustafa Mustafa
    User Training Jan/2019

    Deep Learning Stack on HPC
    • Deep Learning Frameworks: Neon, CNTK, MXNet, …
    • Multi-node libraries: Cray ML PE Plugin, Horovod, MLSL, MPI, GRPC
    • Single-node libraries: MKL-DNN, CuDNN
    • Hardware: CPUs (KNL), GPUs, FPGAs, Accelerators

    Software Frameworks
    • Different frameworks' popularity has evolved rapidly
    • Percentage of ML papers that mention a particular framework (Source: https://twitter.com/karpathy/status/972295865187512320?lang=en)
    • Caffe and Theano most popular 3-4 years ago
    • Then Google released TensorFlow, which now dominates
    • PyTorch is recently rising rapidly in popularity

    Framework overview (IMHO)
    • TensorFlow:
      – Reasonably easy to use directly within python (not as easy as with Keras)
      – Very nice tools for development like TensorBoard
      – Active development for features (e.g. dynamic graphs), performance (e.g. for CPU/KNL) and ease (e.g. estimators)
    • Keras:
      – High-level framework that sits on top of tensorflow (or theano), and now part of TensorFlow
      – Very easy to create standard and even advanced deep networks with a lot of templates/examples

    PyTorch and Caffe (IMHO)
    • PyTorch
      – Relatively recent python adaptation of the 'torch' framework - heavily contributed to by Facebook
      – More pythonic than tensorflow/keras
      – Dynamic graphs from the start - very flexible
      – Popular with (some) ML researchers
      – Experimental, some undocumented quirks
      – Version 1.0 coming soon! rc1 is out already.
    • Caffe
      – Optimised performance (still best for certain networks on CPUs)
      – Relatively difficult to develop new architectures

    TensorFlow (and Keras) @NERSC
    http://www.nersc.gov/users/data-analytics/data-analytics-2/deep-learning/using-tensorflow-at-nersc/
    • Easiest is to use the default anaconda python:
      module load python
      python
      >>> import tensorflow as tf
    • Active work by Intel to optimize for CPU:
      – Available in anaconda.
  • Dynamic Control Flow in Large-Scale Machine Learning
    Dynamic Control Flow in Large-Scale Machine Learning
    Yuan Yu∗ (Microsoft), Martín Abadi (Google Brain), Paul Barham (Google Brain), Eugene Brevdo (Google Brain), Mike Burrows (Google Brain), Andy Davis (Google Brain), Jeff Dean (Google Brain), Sanjay Ghemawat (Google), Tim Harley (Google DeepMind), Peter Hawkins (Google Brain), Michael Isard (Google Brain), Manjunath Kudlur∗ (Cerebras Systems), Rajat Monga (Google Brain), Derek Murray (Google Brain), Xiaoqiang Zheng (Google Brain)

    ABSTRACT
    Many recent machine learning models rely on fine-grained dynamic control flow for training and inference. In particular, models based on recurrent neural networks and on reinforcement learning depend on recurrence relations, data-dependent conditional execution, and other features that call for dynamic control flow. These applications benefit from the ability to make rapid control-flow decisions across a set of computing devices in a distributed system. For performance, scalability, and expressiveness, a machine learning system must support dynamic control flow in distributed and heterogeneous environments.
    […] that use control flow. Third, our choice of non-strict semantics enables multiple loop iterations to execute in parallel across machines, and to overlap compute and I/O operations. We have done our work in the context of TensorFlow, and it has been used extensively in research and production. We evaluate it using several real-world applications, and demonstrate its performance and scalability.

    CCS CONCEPTS
    • Software and its engineering → Data flow languages; Distributed
  • Machine Learning Manual Revision: 58F161e
    Bright Cluster Manager 8.1 Machine Learning Manual Revision: 58f161e Date: Wed Sep 29 2021 ©2020 Bright Computing, Inc. All Rights Reserved. This manual or parts thereof may not be reproduced in any form unless permitted by contract or by written permission of Bright Computing, Inc. Trademarks Linux is a registered trademark of Linus Torvalds. PathScale is a registered trademark of Cray, Inc. Red Hat and all Red Hat-based trademarks are trademarks or registered trademarks of Red Hat, Inc. SUSE is a registered trademark of Novell, Inc. PGI is a registered trademark of NVIDIA Corporation. FLEXlm is a registered trademark of Flexera Software, Inc. PBS Professional, PBS Pro, and Green Provisioning are trademarks of Altair Engineering, Inc. All other trademarks are the property of their respective owners. Rights and Restrictions All statements, specifications, recommendations, and technical information contained herein are current or planned as of the date of publication of this document. They are reliable as of the time of this writing and are presented without warranty of any kind, expressed or implied. Bright Computing, Inc. shall not be liable for technical or editorial errors or omissions which may occur in this document. Bright Computing, Inc. shall not be liable for any damages resulting from the use of this document. Limitation of Liability and Damages Pertaining to Bright Computing, Inc. The Bright Cluster Manager product principally consists of free software that is licensed by the Linux authors free of charge. Bright Computing, Inc. shall have no liability nor will Bright Computing, Inc. provide any warranty for the Bright Cluster Manager to the extent that is permitted by law.