Automated Elastic Pipelining for Distributed Training of Transformers

Total pages: 16

File type: PDF; size: 1020 KB

PipeTransformer: Automated Elastic Pipelining for Distributed Training of Transformers

Chaoyang He 1, Shen Li 2, Mahdi Soltanolkotabi 1, Salman Avestimehr 1

1 University of Southern California  2 Facebook AI. Correspondence to: Chaoyang He <[email protected]>.

Preprint. Under review. arXiv:submit/3591484 [cs.LG] 5 Feb 2021

Abstract

The size of Transformer models is growing at an unprecedented rate. It has taken less than one year to reach trillion-level parameters since the release of GPT-3 (175B). Training such models requires both substantial engineering efforts and enormous computing resources, which are luxuries most research teams cannot afford. In this paper, we propose PipeTransformer, which leverages automated elastic pipelining for efficient distributed training of Transformer models. In PipeTransformer, we design an adaptive on-the-fly freeze algorithm that can identify and freeze some layers gradually during training, and an elastic pipelining system that can dynamically allocate resources to train the remaining active layers. More specifically, PipeTransformer automatically excludes frozen layers from the pipeline, packs active layers into fewer GPUs, and forks more replicas to increase data-parallel width. We evaluate PipeTransformer using Vision Transformer (ViT) on ImageNet and BERT on SQuAD and GLUE datasets. Our results show that, compared to the state-of-the-art baseline, PipeTransformer attains up to 2.83-fold speedup without losing accuracy. We also provide various performance analyses for a more comprehensive understanding of our algorithmic and system-wise design. Finally, we have modularized our training system with flexible APIs and made the source code publicly available.

1. Introduction

Large Transformer models (Brown et al., 2020; Lepikhin et al., 2020) have powered accuracy breakthroughs in both natural language processing and computer vision. GPT-3 hit a new record high accuracy for nearly all NLP tasks. Vision Transformer (ViT) (Dosovitskiy et al., 2020) also achieved 89% top-1 accuracy on ImageNet, outperforming state-of-the-art convolutional networks such as ResNet-152 (He et al., 2016) and EfficientNet (Tan & Le, 2019). To tackle the growth in model sizes, researchers have proposed various distributed training techniques, including parameter servers (Li et al., 2014; Jiang et al., 2020; Kim et al., 2019), pipeline parallelism (Huang et al., 2019; Park et al., 2020; Narayanan et al., 2019), intra-layer parallelism (Lepikhin et al., 2020; Shazeer et al., 2018; Shoeybi et al., 2019), and zero-redundancy data parallelism (Rajbhandari et al., 2019).

Figure 1. Interpretable Freeze Training: DNNs converge bottom-up (results on CIFAR-10 using ResNet). Each pane shows layer-by-layer similarity at T0 (0% trained), T1 (35% trained), T2 (75% trained), and T3 (100% trained), measured with SVCCA (Raghu et al., 2017).

Existing distributed training solutions, however, only study scenarios where all model weights are required to be optimized throughout the training (i.e., computation and communication overhead remains relatively static over different iterations). Recent works on freeze training (Raghu et al., 2017; Morcos et al., 2018) suggest that parameters in neural networks usually converge from the bottom up (i.e., not all layers need to be trained all the way through training). Figure 1 shows an example of how parameter values gradually stabilize during training. This observation motivates us to utilize freeze training for distributed training of Transformer models: training can be accelerated by dynamically allocating resources to focus on a shrinking set of active layers.
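To make the freeze-training idea concrete, below is a minimal PyTorch sketch of progressively freezing bottom layers by disabling their gradients. It is an illustration only: the fixed freeze-every-few-epochs schedule and the `model.blocks` attribute are assumptions for this sketch, not PipeTransformer's adaptive freeze algorithm (Section 3.1).

```python
import torch
import torch.nn as nn

def freeze_bottom_layers(model: nn.Module, num_frozen: int):
    """Disable gradients for the first `num_frozen` blocks of `model.blocks`.

    Assumes the model exposes its Transformer layers as an nn.ModuleList named
    `blocks` (as many ViT/BERT implementations do); adapt the attribute name.
    """
    for i, block in enumerate(model.blocks):
        active = i >= num_frozen
        for p in block.parameters():
            p.requires_grad = active

def train(model, optimizer_fn, data_loader, epochs, freeze_every=5):
    # Illustrative loop: freeze one extra layer every `freeze_every` epochs.
    # This fixed schedule stands in for the adaptive, signal-driven freezing
    # described in the paper.
    for epoch in range(epochs):
        freeze_bottom_layers(model, num_frozen=epoch // freeze_every)
        # Rebuild the optimizer over the still-active parameters only.
        optimizer = optimizer_fn(p for p in model.parameters() if p.requires_grad)
        for x, y in data_loader:
            loss = nn.functional.cross_entropy(model(x), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```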
Such a layer-freezing strategy is especially pertinent to pipeline parallelism, as excluding consecutive bottom layers from the pipeline can reduce computation, memory, and communication overhead.

In this paper, we propose PipeTransformer, an elastic pipelining training acceleration framework that automatically reacts to frozen layers by dynamically transforming the scope of the pipelined model and the number of pipeline replicas. To the best of our knowledge, this is the first paper that studies layer freezing in the context of both pipeline and data-parallel training. Figure 2 demonstrates the benefits of such a combination.

Figure 2. The process of PipeTransformer's automated and elastic pipelining to accelerate distributed training of Transformer models. The figure depicts four cooperating modules acting on pipelines of active and frozen layers: 1. the Freeze Algorithm, 2. AutoPipe (elastic pipelining), 3. AutoDP (spawning more pipeline replicas), and 4. AutoCache (cross-process caching).

First, by excluding frozen layers from the pipeline, the same model can be packed into fewer GPUs, leading to both fewer cross-GPU communications and smaller pipeline bubbles. Second, after packing the model into fewer GPUs, the same cluster can accommodate more pipeline replicas, increasing the width of data parallelism. More importantly, the speedups acquired from these two benefits are multiplicative rather than additive, further accelerating the training.
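As a back-of-the-envelope illustration of why the two effects multiply rather than add (all numbers below are hypothetical, not measurements from the paper):

```python
# Hypothetical illustration of multiplicative speedup (numbers are made up).
# Suppose freezing the bottom half of the layers lets a pipeline shrink from
# 8 GPUs to 4 while also making each remaining pipeline ~1.5x faster
# (fewer stages, smaller bubbles, less frozen-layer computation).
gpus_total = 8
pipeline_len_after = 4
per_pipeline_speedup = 1.5                          # faster per-pipeline iteration
replicas_after = gpus_total // pipeline_len_after   # 2 data-parallel replicas now fit

overall_speedup = per_pipeline_speedup * replicas_after  # 1.5 * 2 = 3.0x, not 1.5 + 2
print(f"overall speedup ~= {overall_speedup}x")
```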
The design of PipeTransformer faces four major challenges. First, the freeze algorithm must make on-the-fly and adaptive freezing decisions; however, existing work (Raghu et al., 2017) only provides a posterior analysis tool. Second, the efficiency of pipeline re-partitioning is influenced by multiple factors, including partition granularity, cross-partition activation size, and the chunking (the number of micro-batches) in mini-batches, which require reasoning and searching in a large solution space. Third, to dynamically introduce additional pipeline replicas, PipeTransformer must overcome the static nature of collective communications and avoid potentially complex cross-process messaging protocols when onboarding new processes (one pipeline is handled by one process). Finally, caching can save time for repeated forward propagation of frozen layers, but it must be shared between existing pipelines and newly added ones, as the system cannot afford to create and warm up a dedicated cache for each replica.

PipeTransformer is designed with four core building blocks to address the aforementioned challenges. First, we design a tunable and adaptive algorithm to generate signals that guide the selection of layers to freeze over different iterations (Section 3.1). Once triggered by these signals, our elastic pipelining module, AutoPipe, packs the remaining active layers into fewer GPUs by taking both activation sizes and variances of workloads across heterogeneous partitions (frozen layers and active layers) into account. It then splits a mini-batch into an optimal number of micro-batches based on prior profiling results for different pipeline lengths (Section 3.2). Our next module, AutoDP, spawns additional pipeline replicas to occupy freed-up GPUs and maintains hierarchical communication process groups to attain dynamic membership for collective communications (Section 3.3). Our final module, AutoCache, efficiently shares activations across existing and new data-parallel processes and automatically replaces stale caches during transitions (Section 3.4).

Overall, PipeTransformer combines the Freeze Algorithm, AutoPipe, AutoDP, and AutoCache modules to provide a significant training speedup. We evaluate PipeTransformer using Vision Transformer (ViT) on ImageNet and BERT on GLUE and SQuAD datasets. Our results show that PipeTransformer attains up to 2.83-fold speedup without losing accuracy. We also provide various performance analyses for a more comprehensive understanding of our algorithmic and system-wise design. Finally, we have also developed open-source flexible APIs for PipeTransformer which offer a clean separation among the freeze algorithm, model definitions, and training accelerations, allowing for transferability to other algorithms that require similar freezing strategies. The source code is made publicly available.

2. Overview

2.1. Background and Problem Setting

Suppose we aim to train a massive model in a distributed training system that uses a hybrid of pipelined model parallelism and data parallelism, targeting scenarios where either the memory of a single GPU device cannot hold the model or, if loaded, the batch size is small enough to avoid running out of memory. More specifically, we define our settings as follows:

Training task and model definition. We train Transformer models (e.g., Vision Transformer (Dosovitskiy et al., 2020), BERT (Devlin et al., 2018)) on large-scale image or text datasets. The Transformer model F has L layers, in which the i-th layer is composed of a forward computation function f_i and a corresponding set of parameters w_i. With this definition, the overall model is F = f_0(w_0) ∘ ... ∘ f_{L-1}(w_{L-1}). The model size is S, and the batch size is set to N_bs.

Training infrastructure. Assume the training infrastructure contains a GPU cluster that has N GPU servers (i.e., nodes). Each node has I GPUs. Our cluster is homogeneous, meaning that each GPU and each server have the same hardware configuration. Each GPU's memory capacity is M_GPU. Servers are connected by a high-bandwidth network interface such as an InfiniBand interconnect.
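The notation above can be mirrored in a small sketch; the layer class, dimensions, and cluster constants below are placeholders chosen for illustration, not the paper's configuration.

```python
import torch.nn as nn

# Model definition: F is the composition of L layer functions f_i with parameters w_i.
L, d_model, nhead = 12, 768, 12
layers = nn.ModuleList(
    [nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead) for _ in range(L)]
)

def F(x):
    for f in layers:   # apply the L layer functions in sequence, bottom layer first
        x = f(x)
    return x

S = sum(p.numel() for p in layers.parameters())   # model size S (here: parameter count)
N_bs = 256                                        # mini-batch size

# Training infrastructure: a homogeneous cluster of N nodes with I GPUs each,
# each GPU with memory capacity M_GPU (placeholder values).
N, I = 2, 8
M_GPU = 16 * 2**30   # 16 GiB per GPU, in bytes
```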
Pipeline parallelism. In each machine, we load a model F into a pipeline P which has K partitions (K also represents the pipeline length).
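A minimal sketch of loading F into a pipeline P with K partitions, assuming the model is exposed as a flat list of layer modules. It uses a naive even split by layer count and plain sequential stage execution; AutoPipe's workload-balanced partitioning and micro-batch pipelining (Section 3.2) are more involved.

```python
import torch
import torch.nn as nn

def partition_into_pipeline(layers: nn.ModuleList, K: int) -> list[nn.Sequential]:
    """Split L layers into K contiguous partitions and place partition k on GPU k.

    A naive even split by layer count; AutoPipe instead balances partitions by
    parameter/activation size and frozen-vs-active workload.
    """
    L = len(layers)
    boundaries = [round(k * L / K) for k in range(K + 1)]
    partitions = []
    for k in range(K):
        stage = nn.Sequential(*layers[boundaries[k]:boundaries[k + 1]])
        partitions.append(stage.to(f"cuda:{k}"))
    return partitions

def pipeline_forward(partitions: list[nn.Sequential], x: torch.Tensor) -> torch.Tensor:
    # Purely sequential stage execution for clarity; real pipeline parallelism
    # additionally splits the mini-batch into micro-batches to overlap stages.
    for k, stage in enumerate(partitions):
        x = stage(x.to(f"cuda:{k}"))
    return x
```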
Recommended publications
  • Backpropagation and Deep Learning in the Brain
Backpropagation and Deep Learning in the Brain. Simons Institute – Computational Theories of the Brain, 2018. Timothy Lillicrap (DeepMind, UCL), with Sergey Bartunov, Adam Santoro, Jordan Guerguiev, Blake Richards, Luke Marris, Daniel Cownden, Colin Akerman, Douglas Tweed, Geoffrey Hinton.

The "credit assignment" problem, and the solution in artificial networks: backprop. Credit assignment by backprop works well in practice and shows up in virtually all of the state-of-the-art supervised, unsupervised, and reinforcement learning algorithms. Why isn't backprop "biologically plausible"? Is there neuroscience evidence for backprop in the brain? A spectrum of credit assignment algorithms.

How to convince a neuroscientist that the cortex is learning via [something like] backprop: to convince a machine learning researcher, an appeal to variance in gradient estimates might be enough, but this is rarely enough to convince a neuroscientist. So what lines of argument help? By "something like backprop" is meant that learning is achieved across multiple layers by sending information from neurons closer to the output back to "earlier" layers to help compute their synaptic updates. 1. Feedback connections in cortex are ubiquitous and modify the …
  • Convolutional Neural Network Transfer Learning for Robust Face Recognition in NAO Humanoid Robot
Convolutional Neural Network Transfer Learning for Robust Face Recognition in NAO Humanoid Robot. Daniel Bussey (Embry-Riddle Aeronautical University); Alex Glandon, Lasitha Vidyaratne, Mahbubul Alam, Khan Iftekharuddin (Old Dominion University).

Background. Artificial neural networks are computing systems whose model architecture is inspired by biological neural networks [1]. They are capable of improving task performance without task-specific programming and show excellent performance at classification-based tasks such as face recognition [2], text recognition [3], and natural language processing [4]. Deep learning is a subfield of machine learning that focuses on algorithms inspired by the function and structure of the brain, called artificial neural networks; it allows computers to learn through a process called training.

Research approach. Retrain AlexNet on the CASIA-WebFace dataset to configure the network for face recognition tasks. Acquire an input image using NAO's camera or a high-resolution camera to run through the convolutional neural network. Extract the features of the input image using the neural networks AlexNet and VGG-Face. Compare the features of the input image to the features of each image in the people database. Determine whether a match is detected or if the input image is a photo of a …

Results. VGG-Face shows better results in every performance benchmark measured compared to AlexNet, although AlexNet is able to extract features from an image 800% faster than VGG-Face. Resolution of the input image does not have a statistically significant impact on the performance of VGG-Face and AlexNet. AlexNet's performance decreases with respect to distance from the camera, whereas VGG-Face shows no performance loss. Both frameworks show excellent performance when eliminating false positives.
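A rough PyTorch sketch of the extract-features-then-match pipeline described in this excerpt, using torchvision's ImageNet-pretrained AlexNet as a stand-in (the poster fine-tunes on CASIA-WebFace and also uses VGG-Face, which torchvision does not bundle); the similarity threshold is an arbitrary placeholder.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
alexnet.classifier = alexnet.classifier[:-1]   # drop the final class layer -> 4096-d features
alexnet.eval()

@torch.no_grad()
def embed(img_batch: torch.Tensor) -> torch.Tensor:
    # img_batch: (N, 3, 224, 224), normalized as the pretrained weights expect.
    return F.normalize(alexnet(img_batch), dim=1)

@torch.no_grad()
def best_match(query: torch.Tensor, database: torch.Tensor, threshold: float = 0.8):
    # Cosine similarity of one query embedding against an (M, 4096) gallery of
    # embeddings of the people database; return the best index or None.
    sims = database @ embed(query.unsqueeze(0)).squeeze(0)
    score, idx = sims.max(dim=0)
    return (idx.item(), score.item()) if score >= threshold else (None, score.item())
```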
  • Introduction to Deep Learning Framework
Introduction to Deep Learning Framework

1. Introduction
1.1. Commonly used frameworks. The most commonly used frameworks for deep learning include PyTorch, TensorFlow, Keras, Caffe, Apache MXNet, etc. PyTorch: an open-source machine learning library developed by Facebook AI Research Lab, based on the Torch library; it supports Python and C++ interfaces. TensorFlow: an open-source software library for dataflow and differentiable programming, developed by the Google Brain team; it provides stable Python and C APIs. Keras: an open-source neural-network library written in Python, conceived to be an interface and capable of running on top of TensorFlow, Microsoft Cognitive Toolkit, R, Theano, or PlaidML. Caffe: open source under a BSD licence, developed at the University of California, Berkeley, and written in C++ with a Python interface. Apache MXNet: an open-source deep learning software framework that supports a flexible programming model and multiple programming languages (including C++, Python, Java, Julia, MATLAB, JavaScript, Go, R, Scala, Perl, and Wolfram Language).

1.2. PyTorch
1.2.1. Data. Tensor: the major computation unit in PyTorch. A tensor can be viewed as the extension of a vector (one-dimensional) and a matrix (two-dimensional), and can be defined with any number of dimensions. Variable: a wrapper of a tensor which includes the creator, the value of the variable (the tensor), and the gradient. This is the core of automatic differentiation in PyTorch, as it holds both the value and the creator, which is essential for the backward pass. Parameter: a subset of Variable.
1.2.2. Functions. NNModules (torch.nn) are a combination of parameters and functions, and can be interpreted as layers. There are common modules such as convolution layers, linear layers, pooling layers, dropout layers, etc.
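To ground these terms, a minimal example; note that in current PyTorch the Variable wrapper has been merged into Tensor, so tensors themselves carry requires_grad and .grad.

```python
import torch
import torch.nn as nn

# Tensor: the basic n-dimensional computation unit; setting requires_grad records
# operations for automatic differentiation (the role the excerpt assigns to Variable).
x = torch.randn(4, 3, requires_grad=True)

# Parameter: a Tensor subclass that a Module automatically registers as learnable.
class TinyLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, inp):
        return inp @ self.weight.t() + self.bias

# NNModules compose parameters and functions into layers.
model = nn.Sequential(TinyLinear(3, 8), nn.ReLU(), nn.Dropout(p=0.1), TinyLinear(8, 2))
loss = model(x).sum()
loss.backward()   # gradients accumulate in x.grad and in each Parameter's .grad
```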
  • Zero-Shot Text-To-Image Generation
Zero-Shot Text-to-Image Generation. Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever.

Abstract. Text-to-image generation has traditionally focused on finding better modeling assumptions for training on a fixed dataset. These assumptions might involve complex architectures, auxiliary losses, or side information such as object part labels or segmentation masks supplied during training. We describe a simple approach for this task based on a transformer that autoregressively models the text and image tokens as a single stream of data. With sufficient data and scale, our approach is competitive with previous domain-specific models when evaluated in a zero-shot fashion.

Figure 1. Comparison of original images (top) and reconstructions from the discrete VAE (bottom). The encoder downsamples the spatial resolution by a factor of 8. While details (e.g., the texture of the cat's fur, the writing on the storefront, and the thin lines in the illustration) are sometimes lost or distorted, the main features of the image are still typically recognizable. We use a large vocabulary size of 8192 to mitigate the loss of information.

1. Introduction. Modern machine learning approaches to text-to-image synthesis started with the work of Mansimov et al. (2015), who showed that the DRAW (Gregor et al., 2015) generative model, when extended to condition on image captions, could also generate novel visual scenes. Reed et al. (2016b) later demonstrated that using a generative adversarial network […] a captioning model pretrained on MS-COCO.
  • Tiny ImageNet Challenge
Tiny ImageNet Challenge. Jiayu Wu, Qixiang Zhang, Guoxi Xu (Stanford University). [email protected] [email protected] [email protected]

Abstract. We present image classification systems using Residual Network (ResNet), Inception-ResNet, and Very Deep Convolutional Network (VGGNet) architectures. We apply data augmentation, dropout, and other regularization techniques to prevent over-fitting of our models. Furthermore, we present error analysis based on per-class accuracy. We also explore the impact of initialization methods, weight decay, and network depth on system performance. Moreover, visualizations of intermediate outputs and convolutional filters are shown. Besides, we complete an extra object localization system based upon a combination of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) units. Our best classification model achieves a top-1 test error rate of 43.10% on the Tiny ImageNet dataset, and our best localization model can localize with high accuracy more than one object, given training images with one object labeled.

Our best model, a fine-tuned Inception-ResNet, achieves a top-1 error rate of 43.10% on the test dataset. Moreover, we implemented an object localization network based on an RNN with LSTM [7] cells, which achieves precise results. In the Experiments and Evaluations section, we present a thorough analysis of the results, including per-class error analysis, intermediate output distribution, the impact of initialization, etc.

2. Related Work. Deep convolutional neural networks have enabled the field of image recognition to advance at an unprecedented pace over the past decade. [10] introduces AlexNet, which has 60 million parameters and 650,000 neurons. The model consists of five convolutional layers, some of which are followed by …
  • Deep Learning Is Robust to Massive Label Noise
Deep Learning is Robust to Massive Label Noise. David Rolnick, Andreas Veit, Serge Belongie, Nir Shavit.

Abstract. Deep neural networks trained on large supervised datasets have led to impressive results in image classification and other tasks. However, well-annotated datasets can be time-consuming and expensive to collect, lending increased interest to larger but noisy datasets that are more easily obtained. In this paper, we show that deep neural networks are capable of generalizing from training data for which true labels are massively outnumbered by incorrect labels. We demonstrate remarkably high test performance after training on corrupted data from MNIST, CIFAR, and ImageNet. For example, on MNIST we obtain test accuracy above 90 percent even after each clean training example has been diluted with 100 randomly-…

Thus, annotation can be expensive and, for tasks requiring expert knowledge, may simply be unattainable at scale. To address this limitation, other training paradigms have been investigated to alleviate the need for expensive annotations, such as unsupervised learning (Le, 2013), self-supervised learning (Pinto et al., 2016; Wang & Gupta, 2015), and learning from noisy annotations (Joulin et al., 2016; Natarajan et al., 2013; Veit et al., 2017). Very large datasets (e.g., Krasin et al. (2016); Thomee et al. (2016)) can often be obtained, for example from web sources, with partial or unreliable annotation. This can allow neural networks to be trained on a much wider variety of tasks or classes and with less manual effort. The good performance obtained from these large, noisy datasets indicates that deep learning approaches can tolerate modest amounts of noise in the training set.
  • Real-Time Object Detection for Autonomous Vehicles Using Deep Learning
IT 19 007, Examensarbete 30 hp (degree project, 30 credits), June 2019. Real-time object detection for autonomous vehicles using deep learning. Roger Kalliomäki. Institutionen för informationsteknologi / Department of Information Technology, Uppsala University.

Abstract. Self-driving systems are commonly categorized into three subsystems: perception, planning, and control. In this thesis, the perception problem is studied in the context of real-time object detection for autonomous vehicles. The problem is studied by implementing a cutting-edge real-time object detection deep neural network called Single Shot MultiBox Detector, which is trained and evaluated on both real and virtual driving-scene data. The results show that modern real-time-capable object detection networks achieve their fast performance at the expense of detection rate and accuracy. The Single Shot MultiBox Detector network is capable of processing images at over fifty frames per second, but scored a relatively low mean average precision score on a diverse driving-scene dataset provided by Berkeley University. Further development in both hardware and software technologies will presumably result in a better trade-off between run-time and detection rate. However, as the technologies stand today, general real-time object detection networks do not seem to be suitable for high-precision tasks, such as visual perception for autonomous vehicles. Additionally, a comparison is made between two versions of the Single Shot MultiBox Detector network, one trained on a virtual driving-scene dataset from Ford Center for Autonomous Vehicles, and one trained on a subset of the earlier used Berkeley dataset.
  • Training Neural Networks with Tensor Cores
Training Neural Networks with Tensor Cores. Dusan Stosic, NVIDIA.

Agenda: A100 Tensor Cores and TensorFloat-32 (TF32); mixed-precision Tensor Cores: recap and new advances; accuracy and performance considerations.

Motivation – the cost of DL training. Vision tasks (ImageNet classification): in 2012, AlexNet was trained on 2 GPUs for 5-6 days; in 2017, ResNeXt-101 was trained on 8 GPUs for over 10 days; in 2019, NoisyStudent was trained with ~1k TPUs for 7 days. Language tasks (LM modeling): in 2018, BERT was trained on 64 GPUs for 4 days; in early 2020, T5 was trained on 256 GPUs; in mid-2020, GPT-3. What's being done to reduce costs: hardware accelerators like GPU Tensor Cores, and lower computational complexity with reduced precision or network compression (aka sparsity).

Basics of floating-point precision. Floating point is the standard way to represent real numbers on a computer: double precision (FP64), single precision (FP32), half precision (FP16/BF16). Numbers cannot be stored with infinite precision, so there is a trade-off between range and precision: representing values at widely different magnitudes (range) — different tensors (weights, activations, and gradients) when training a network — and providing the same relative accuracy at all magnitudes (precision) — network weight magnitudes are typically O(1), while activations can have orders of magnitude larger values. How floating-point numbers work: the exponent determines the range of values (scientific notation in binary, base 2); the fraction (or mantissa) determines the relative precision between values (2^mantissa samples between powers of …
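One quick way to see the range-versus-precision trade-off described here is to inspect each storage format with torch.finfo (a sketch; TF32 is a Tensor Core math mode rather than a separate storage dtype, so it does not appear in the list):

```python
import torch

# Compare range (set by the exponent) and relative precision (set by the mantissa)
# across the floating-point storage formats mentioned above.
for dtype in (torch.float64, torch.float32, torch.float16, torch.bfloat16):
    info = torch.finfo(dtype)
    print(f"{str(dtype):15s} max={info.max:9.3e}  eps={info.eps:9.3e}")

# FP16 keeps ~3-4 decimal digits (10-bit mantissa) but only reaches ~6.5e4,
# while BF16 keeps FP32's range (8-bit exponent) at coarser precision (7-bit mantissa).
```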
  • CS231n Lecture 8: Deep Learning Software
Lecture 8: Deep Learning Software. Fei-Fei Li, Justin Johnson, and Serena Yeung. April 27, 2017.

Administrative: project proposals were due Tuesday; we are assigning TAs to projects, stay tuned; we are grading A1; A2 is due Thursday 5/4; remember to stop your instances when not in use, and only use GPU instances for the last notebook.

Last time: regularization (dropout: add noise, then marginalize out), transfer learning (freeze the early convolutional layers, reinitialize the last fully-connected layers and train), and optimization (SGD+Momentum, Nesterov, RMSProp, Adam).

Today: CPU vs GPU; deep learning frameworks — Caffe / Caffe2, Theano / TensorFlow, Torch / PyTorch.

CPU vs GPU. Spot the CPU (central processing unit)! Spot the GPUs (graphics processing unit)! NVIDIA vs AMD …
  • PyTorch Is an Open Source Machine Learning Library for Python and Is Completely Based on Torch
About the tutorial. PyTorch is an open source machine learning library for Python and is completely based on Torch. It is primarily used for applications such as natural language processing. PyTorch is developed by Facebook's artificial-intelligence research group along with Uber's "Pyro" software for the concept of built-in probabilistic programming.

Audience. This tutorial has been prepared for Python developers who focus on research and development with machine learning algorithms along with natural language processing systems. The aim of this tutorial is to completely describe all concepts of PyTorch with real-world examples.

Prerequisites. Before proceeding with this tutorial, you need knowledge of Python and the Anaconda framework (commands used in Anaconda). Having knowledge of artificial intelligence concepts will be an added advantage.

Copyright & Disclaimer. Copyright 2019 by Tutorials Point (I) Pvt. Ltd. All the content and graphics published in this e-book are the property of Tutorials Point (I) Pvt. Ltd. The user of this e-book is prohibited from reusing, retaining, copying, distributing, or republishing any contents or a part of the contents of this e-book in any manner without written consent of the publisher. We strive to update the contents of our website and tutorials as timely and as precisely as possible; however, the contents may contain inaccuracies or errors. Tutorials Point (I) Pvt. Ltd. provides no guarantee regarding the accuracy, timeliness, or completeness of our website or its contents, including this tutorial. If you discover any errors on our website or in this tutorial, please notify us at [email protected]
  • The History Began from AlexNet: A Comprehensive Survey on Deep Learning Approaches
The History Began from AlexNet: A Comprehensive Survey on Deep Learning Approaches. Md Zahangir Alom, Tarek M. Taha, Chris Yakopcic, Stefan Westberg, Paheding Sidike, Mst Shamima Nasrin, Brian C. Van Essen, Abdul A. S. Awwal, and Vijayan K. Asari.

Abstract — In recent years, deep learning has garnered tremendous success in a variety of application domains. This new field of machine learning has been growing rapidly and has been applied to most traditional application domains, as well as some new areas that present more opportunities. Different methods have been proposed based on different categories of learning, including supervised, semi-supervised, and un-supervised learning. Experimental results show state-of-the-art performance using deep learning when compared to traditional machine learning approaches in the fields of image processing, computer vision, speech recognition, machine translation, art, medical imaging, medical information processing, robotics and control, bio-informatics, natural language processing (NLP), cybersecurity, and many others.

I. Introduction. Since the 1950s, a small subset of Artificial Intelligence (AI), often called Machine Learning (ML), has revolutionized several fields in the last few decades. Neural Networks (NN) are a subfield of ML, and it was this subfield that spawned Deep Learning (DL). Since its inception, DL has been creating ever larger disruptions, showing outstanding success in almost every application domain. Fig. 1 shows the taxonomy of AI. DL (using either deep architectures of learning or hierarchical learning approaches) is a class of ML developed largely from 2006 onward. Learning is a procedure consisting of estimating …
  • Kernel Descriptors for Visual Recognition
Kernel Descriptors for Visual Recognition. Liefeng Bo (University of Washington, Seattle, WA 98195, USA); Xiaofeng Ren (Intel Labs Seattle, Seattle, WA 98105, USA); Dieter Fox (University of Washington & Intel Labs Seattle).

Abstract. The design of low-level image features is critical for computer vision algorithms. Orientation histograms, such as those in SIFT [16] and HOG [3], are the most successful and popular features for visual object and scene recognition. We highlight the kernel view of orientation histograms, and show that they are equivalent to a certain type of match kernels over image patches. This novel view allows us to design a family of kernel descriptors which provide a unified and principled framework to turn pixel attributes (gradient, color, local binary pattern, etc.) into compact patch-level features. In particular, we introduce three types of match kernels to measure similarities between image patches, and construct compact low-dimensional kernel descriptors from these match kernels using kernel principal component analysis (KPCA) [23]. Kernel descriptors are easy to design and can turn any type of pixel attribute into patch-level features. They outperform carefully tuned and sophisticated features including SIFT and deep belief networks. We report superior performance on standard image classification benchmarks: Scene-15, Caltech-101, CIFAR10, and CIFAR10-ImageNet.

1 Introduction. Image representation (features) is arguably the most fundamental task in computer vision. The problem is highly challenging because images exhibit high variations, are highly structured, and lie in high dimensional spaces. In the past ten years, a large number of low-level features over images have been proposed.
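As a toy illustration of the KPCA step that compresses match-kernel features into compact descriptors, here is a generic sketch with an RBF kernel and random stand-in patch attributes (not the paper's gradient/color/LBP match kernels):

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Toy "patch attributes" (e.g., flattened per-patch gradient/orientation features).
rng = np.random.default_rng(0)
train_patches = rng.normal(size=(200, 16))   # 200 sampled patches, 16 raw attributes each
new_patches = rng.normal(size=(5, 16))

# KPCA over an RBF kernel compresses the implicit high-dimensional kernel features
# into compact low-dimensional descriptors, one vector per patch.
kpca = KernelPCA(n_components=10, kernel="rbf", gamma=0.5)
kpca.fit(train_patches)
descriptors = kpca.transform(new_patches)    # shape (5, 10): compact patch descriptors
print(descriptors.shape)
```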