TedNet: A Pytorch Toolkit for Tensor Decomposition Networks


TedNet: A Pytorch Toolkit for Tensor Decomposition Networks

Yu Pan (a), Maolin Wang (c), Zenglin Xu (a,b,*)

(a) Harbin Institute of Technology Shenzhen, Shenzhen, China
(b) Pengcheng Lab, Shenzhen, China
(c) University of Electronic Science and Technology of China, Chengdu, China
(*) Corresponding author. Email addresses: [email protected] (Yu Pan), [email protected] (Maolin Wang), [email protected] (Zenglin Xu)

arXiv:2104.05018v1 [cs.LG] 11 Apr 2021. Preprint submitted to Neurocomputing, April 13, 2021.

Abstract

Tensor Decomposition Networks (TDNs) are valued for their inherently compact architectures. To make TDNs more convenient to use, we present TedNet, a toolkit built on the Pytorch framework that gives researchers a flexible way to exploit TDNs. TedNet implements five kinds of tensor decomposition (i.e., CANDECOMP/PARAFAC (CP), Block-Term Tucker (BT), Tucker-2, Tensor Train (TT) and Tensor Ring (TR)) on the traditional deep neural layers, namely the convolutional layer and the fully-connected layer. With these basic layers, it is simple to construct a variety of TDNs such as TR-ResNet, TT-LSTM, etc. TedNet is available at https://github.com/tnbar/tednet.

Keywords: Tensor Decomposition Networks, Deep Neural Networks, Tensor Optimization

1. Introduction

Tensor Decomposition Networks (TDNs) are constructed by decomposing deep neural layers into tensor formats. Because the original tensor of a layer can be recovered from its decomposition cores, TDNs are often regarded as a compression method for the corresponding networks. Compared with traditional networks such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), TDNs can be much smaller and occupy far less memory. For example, TT-LSTM [1], BT-LSTM [2] and TR-LSTM [3] reduce the number of parameters by factors of 17554, 17414 and 34192, respectively, while reaching higher accuracy than the original models. With such lightweight architectures and good performance, TDNs are promising for resource-restricted settings such as mobile devices and microcomputers. Against this background, we designed the TedNet package to make it convenient for researchers to explore TDNs.

Figure 1: The framework of TedNet. TedNet is based on Pytorch and adopts NumPy for numerical calculations. Tensor decomposition (TD) can be applied to convolutional layers or linear layers; the implemented variants include CP, Tucker-2, Tensor Ring, Tensor Train and Block-Term Tucker. An illustration of two tensorized classical neural blocks (i.e., ResNet and RNN) built on the Tensor Decomposition Layer is shown on the right of the figure.

Table 1: Functions of TNBase.

    set_tn_type       Set the tensor decomposition type.
    set_nodes         Generate tensor decomposition nodes, then edit node information.
    set_params_info   Record information of the parameters.
    tn_contract       Contract the inputs with the tensor nodes.

Related packages include T3F [4], Tensorly [5], TensorD [6], TensorNetwork [7] and tntorch [8]. T3F is explicitly designed for Tensor Train decomposition on Tensorflow [9]. TensorD, likewise based on Tensorflow, supports CP and Tucker decomposition. By contrast, TedNet implements five kinds of tensor decomposition with a Pytorch backend [10]. TensorNetwork and Tensorly incorporate abundant tensor calculation tools: TensorNetwork is built on Tensorflow, while Tensorly runs with a variety of backends such as CuPy, Pytorch and Tensorflow. Unfortunately, TensorNetwork and Tensorly both serve tensor decomposition algorithms rather
than TDNs and lack an Application Programming Interface (API) for building tensorial neural networks directly. Compared with them, TedNet can set up a TDN layer quickly with a direct API call. In addition, we provide several deep TDNs that are currently popular among researchers. Owing to the dynamic graph mechanism of Pytorch, TedNet is also easy to debug.

2. TedNet Details

TedNet is designed with the goal of building TDNs by calling the corresponding APIs, which greatly simplifies the process of constructing TDNs. As shown in Figure 1, TedNet adopts Pytorch as the training framework because of its automatic differentiation and its convenience for building DNN models.

Specifically, the fundamental module of TedNet is TNBase, an abstract class that inherits from torch.nn.Module. Thus, TedNet models can be combined seamlessly with other Pytorch models. As an abstract class, TNBase requires its sub-classes to implement the four functions described in Table 1. In addition, for better numerical calculations, TedNet also uses NumPy [11] to assist in tensor operations.

Usually, DNNs are built from CNNs and linear layers. The weight of a CNN is a 4-mode tensor C ∈ R^(K×K×C_in×C_out), where K is the convolutional window size, C_in denotes the number of input channels and C_out the number of output channels. A linear layer is a matrix W ∈ R^(I×O), where I and O are the lengths of the input and output features, respectively. Similar to DNNs, TDNs consist of TD-CNNs and TD-Linears (for brevity, the prefix TD- denotes the corresponding tensor decomposition model), whose weights C and W are factorized with tensor decomposition. Following this pattern, TedNet offers five frequently used tensor decompositions (i.e., CP, Tucker-2, Block-Term Tucker, Tensor Train and Tensor Ring), which cover most common situations. Notably, TedNet is the first open-source package that supports Tensor Ring decomposition. Moreover, on top of TD-CNNs and TD-Linears, TedNet provides several tensor-decomposition-based deep neural networks, e.g., TD-ResNets and TD-RNNs.

Figure 2: Experiments on UCF11. CR is short for Compression Ratio. (a) Training process over 150 epochs; final top-1 accuracies: LSTM 0.8703, BTT-LSTM 0.8892, CP-LSTM 0.8892, TK2-LSTM 0.75, TR-LSTM 0.9209, TT-LSTM 0.9019. (b) Compression ratios (146×-177×) for BTT, CP, TK2, TR and TT.

Table 2: Experiments on Cifar10/100. Params denotes the number of parameters.

                           Cifar10                      Cifar100
    Model          Rank    Params   CR    Accuracy     Params   CR    Accuracy
    ResNet-32       -      0.46M    1×    0.9228       0.47M    1×    0.6804
    BTT-ResNet-32   4      0.08M    6×    0.8589       0.08M    6×    0.5206
    CP-ResNet-32    10     0.03M    18×   0.8802       0.03M    18×   0.4445
    TK2-ResNet-32   10     0.05M    9×    0.8915       0.06M    9×    0.5398
    TR-ResNet-32    10     0.09M    5×    0.9076       0.09M    5×    0.653
    TT-ResNet-32    10     0.09M    5×    0.9020       0.10M    5×    0.6386

3. Benchmark

To date, TDNs have mostly been applied in the computer vision field. Thus, to validate the performance of TedNet, we conduct experiments on two datasets:

• UCF11 Dataset: contains 1600 video clips at a resolution of 320 × 240, divided into 11 action categories. Each category consists of 25 groups of videos, with more than 4 clips in each group.

• Cifar10/100 Dataset: both CIFAR10 and CIFAR100 consist of 50,000 training images and 10,000 test images of size 32 × 32 × 3. CIFAR10 has 10 object classes, while CIFAR100 has 100 categories.

Listing 1: A usage sample of TR-LeNet-5.

    import tednet.tnn.tensor_ring as tr

    import torch
    import torch.nn.functional as F
    import torch.optim as optim

    from torchvision import datasets, transforms

    # Define data loader
    data_loader = torch.utils.data.DataLoader(
        datasets.MNIST('./data', train=True, download=True,
                       transform=transforms.Compose([
                           transforms.ToTensor(),
                           transforms.Normalize((0.1307,), (0.3081,))
                       ])),
        batch_size=128, shuffle=True)

    # Define TR-LeNet5
    model = tr.TRLeNet5(10, [6, 6, 6, 6])
    optimizer = optim.SGD(model.parameters(), lr=2e-2)

    # Train model
    model.train()
    for epoch in range(20):
        for data, target in data_loader:
            optimizer.zero_grad()
            output = model(data)
            loss = F.cross_entropy(output, target)
            loss.backward()
            optimizer.step()

On UCF11, our goal is to complete a video classification task. Using the same setting as described in the literature [3], we first extract a 2048-dimensional feature from each frame of a video with Inception-V3 [12], and then feed these features as step inputs into TD-LSTMs. Results are shown in Figure 2: almost every tensor decomposition model achieves better accuracy than the baseline LSTM, except Tucker-2.

On Cifar10/100, we construct an image classification task. By applying TD-ResNet-32, we validate the performance of all the tensor decompositions in TedNet. Results are shown in Table 2. Unlike the results on UCF11, tensor decomposition here leads to a loss in accuracy relative to the original models, though for TR-ResNet-32 and TT-ResNet-32 the loss is small.

4. Installation and Illustrative Examples

There are two ways to install TedNet. Since the source code of TedNet is hosted on GitHub, it can be installed from the downloaded code with the command python setup.py install. The other, recommended way is to install TedNet through PyPI with the command pip install tednet. After installation, all tensor decomposition models of TedNet can be used.

An example using Tensor Ring is shown in Listing 1. Tensor Ring decomposition becomes available by importing the module tednet.tnn.tensor_ring. The usage of the other decompositions is the same, and more details can be found in the documentation.
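To give a concrete picture of what a contraction routine such as tn_contract (Table 1) has to compute, the following NumPy sketch reconstructs a 3-mode tensor from three Tensor Ring cores. This is an illustration of the Tensor Ring format only, not TedNet's actual implementation; all mode sizes and ranks below are made-up examples.

```python
import numpy as np

# Tensor Ring (TR) format: a 3-mode tensor T of shape (n1, n2, n3) is
# represented by three cores whose rank (bond) indices form a closed ring.
n1, n2, n3 = 4, 5, 6   # mode sizes (hypothetical)
r1, r2, r3 = 2, 3, 2   # TR ranks; the last rank connects back to the first

rng = np.random.default_rng(0)
G1 = rng.standard_normal((r1, n1, r2))
G2 = rng.standard_normal((r2, n2, r3))
G3 = rng.standard_normal((r3, n3, r1))

# Contract the ring: sum over all three rank indices a, b, c.
T = np.einsum('aib,bjc,cka->ijk', G1, G2, G3)

dense_params = n1 * n2 * n3                 # entries of the full tensor
tr_params = G1.size + G2.size + G3.size     # entries stored in TR format
print(T.shape, dense_params, tr_params)     # (4, 5, 6) 120 78
```

Even at these toy sizes the TR cores hold fewer numbers than the dense tensor; the gap widens rapidly as the mode sizes grow, which is the source of the compression reported for TDNs.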
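The compression ratios reported above follow from simple parameter arithmetic: a factorized layer stores small cores instead of one large weight. As a hedged illustration (the reshaping and ranks below are hypothetical choices, not taken from any model in Table 2), here is the count for a Tensor Train factorization of a 1024 × 1024 linear layer:

```python
from math import prod

# A dense linear weight W of shape I x O with I = O = 1024,
# reshaped into 4-mode input/output tensors for TT factorization.
in_shape = [4, 8, 8, 4]    # product = 1024 input features
out_shape = [4, 8, 8, 4]   # product = 1024 output features
ranks = [1, 6, 6, 6, 1]    # TT ranks; boundary ranks are fixed to 1

dense_params = prod(in_shape) * prod(out_shape)

# TT core k has shape (r_k, in_k, out_k, r_{k+1}); sum the core sizes.
tt_params = sum(r0 * i * o * r1
                for r0, i, o, r1 in zip(ranks, in_shape, out_shape, ranks[1:]))

compression_ratio = dense_params / tt_params
print(dense_params, tt_params, round(compression_ratio))  # 1048576 4800 218
```

With these particular ranks the layer shrinks by roughly two orders of magnitude; raising the TT ranks trades compression for expressiveness, which is why the Rank column in Table 2 moves both the parameter count and the accuracy.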