A Quantitative Analysis of the Effect of Batch Normalization on Gradient Descent

Total Pages: 16

File Type: PDF, Size: 1020 KB

A Quantitative Analysis of the Effect of Batch Normalization on Gradient Descent

Yongqiang Cai 1, Qianxiao Li 1 2, Zuowei Shen 1

1 Department of Mathematics, National University of Singapore, Singapore. 2 Institute of High Performance Computing, A*STAR, Singapore. Correspondence to: Yongqiang Cai <[email protected]>, Qianxiao Li <[email protected]>, Zuowei Shen <[email protected]>.

Proceedings of the 36th International Conference on Machine Learning, Long Beach, California, PMLR 97, 2019. Copyright 2019 by the author(s).

Abstract

Despite its empirical success and recent theoretical progress, there generally lacks a quantitative analysis of the effect of batch normalization (BN) on the convergence and stability of gradient descent. In this paper, we provide such an analysis on the simple problem of ordinary least squares (OLS), where the precise dynamical properties of gradient descent (GD) are completely known, thus allowing us to isolate and compare the additional effects of BN. More precisely, we show that unlike GD, gradient descent with BN (BNGD) converges for arbitrary learning rates for the weights, and the convergence remains linear under mild conditions. Moreover, we quantify two different sources of acceleration of BNGD over GD: one due to over-parameterization, which improves the effective condition number, and another due to having a large range of learning rates giving rise to fast descent. These phenomena set BNGD apart from GD and could account for much of its robustness properties. These findings are confirmed quantitatively by numerical experiments, which further show that many of the uncovered properties of BNGD in OLS are also observed qualitatively in more complex supervised learning problems.

1. Introduction

Batch normalization (BN) is one of the most important techniques for training deep neural networks and has proven extremely effective in avoiding gradient blowups during back-propagation and speeding up convergence. In its original introduction (Ioffe & Szegedy, 2015), the desirable effects of BN are attributed to the so-called "reduction of covariate shift". However, it is unclear what this statement means in precise mathematical terms.

Although recent theoretical work has established certain convergence properties of gradient descent with BN (BNGD) and its variants (Ma & Klabjan, 2017; Kohler et al., 2018; Arora et al., 2019), there generally lacks a quantitative comparison between the dynamics of the usual gradient descent (GD) and BNGD. In other words, a basic question that one could pose is: what quantitative changes does BN bring to the stability and convergence of gradient descent dynamics? Or even more simply: why should one use BNGD instead of GD? To date, a general mathematical answer to these questions remains elusive. This can be partly attributed to the complexity of the optimization objectives that one typically applies BN to, such as those encountered in deep learning. In these cases, even a quantitative analysis of the dynamics of GD itself is difficult, not to mention a precise comparison between the two.

For this reason, it is desirable to formulate the simplest non-trivial setting, on which one can concretely study the effect of batch normalization and answer the questions above in a quantitative manner. This is the goal of the current paper, where we focus on perhaps the simplest supervised learning problem – ordinary least squares (OLS) regression – and analyze precisely the effect of BNGD when applied to this problem. A primary reason for this choice is that the dynamics of GD in least-squares regression is completely understood, thus allowing us to isolate and contrast the additional effects of batch normalization.

Our main findings can be summarized as follows:

1. Unlike GD, BNGD converges for arbitrarily large learning rates for the weights, and the convergence remains linear under mild conditions.

2. The asymptotic linear convergence of BNGD is faster than that of GD, and this can be attributed to the over-parameterization that BNGD introduces.

3. Unlike GD, the convergence rate of BNGD is insensitive to the choice of learning rates. The range of insensitivity can be characterized, and in particular it increases with the dimensionality of the problem.

Although these findings are established concretely only for the OLS problem, we will show through numerical experiments that some of them hold qualitatively, and sometimes even quantitatively, for more general situations in deep learning.

1.1. Related Work

Batch normalization was originally introduced in Ioffe & Szegedy (2015) and subsequently studied in further detail in Ioffe (2017). Since its introduction, it has become an important practical tool to improve stability and efficiency of training deep neural networks (Bottou et al., 2018). Initial heuristic arguments attribute the desirable features of BN to concepts such as "covariate shift", but alternative explanations based on landscapes (Santurkar et al., 2018) and effective regularization (Bjorck et al., 2018) have been proposed.

Recent theoretical studies of BN include Ma & Klabjan (2017); Kohler et al. (2018); Arora et al. (2019). We now outline the main differences between them and the current work. In Ma & Klabjan (2017), the authors proposed a variant of BN, the diminishing batch normalization (DBN) algorithm, and established its convergence to a stationary point of the loss function.

In Kohler et al. (2018), the authors also considered a BNGD variant by dynamically setting the learning rates and using bisection to optimize the rescaling variables introduced by BN. It is shown that this variant of BNGD converges linearly for simplified models, including an OLS model and "learning halfspaces". The primary difference in the current work is that we do not dynamically modify the learning rates, and consider instead a constant learning rate, i.e. the original BNGD algorithm. This is an important distinction; while a decaying or dynamic learning rate is sometimes used in GD, in the case of BN it is critical to analyze the constant learning rate case, precisely because one of the key practical advantages of BN is that a big learning rate can be used. Moreover, this allows us to isolate the influence of batch normalization itself, without the potentially obfuscating effects a dynamic learning rate schedule can introduce (e.g. see Eq. (10) and the discussion that follows). As the goal of considering a simplified model is to analyze the additional effects purely due to BN on GD, it is desirable to perform our analysis in this regime.

In Arora et al. (2019), the authors proved a general convergence result for BNGD of O(k^(-1/2)) in terms of the gradient norm for objectives with Lipschitz continuous gradients. This matches the best result for gradient descent on general non-convex functions with learning rate tuning (Carmon et al., 2017). In contrast, our convergence result is in iteration and is shown to be linear under mild conditions (Theorem 3.4). This convergence result is stronger, but this is to be expected since we are considering a specific case. More importantly, we discuss concretely how BNGD offers advantages over GD instead of just matching its best-case performance. For example, not only do we show that convergence occurs for any learning rate, we also derive a quantitative relationship between the learning rate and the convergence rate, from which the robustness of BNGD on OLS can be explained (see Section 3).

1.2. Organization

Our paper is organized as follows. In Section 2, we outline the ordinary least squares (OLS) problem and present GD and BNGD as alternative means to solve this problem. In Section 3, we demonstrate and analyze the convergence of BNGD for the OLS model, and in particular contrast the results with the behavior of GD, which is completely known for this model. We also discuss the important insights into BNGD that these results provide us with. We then validate these findings on more general supervised learning problems in Section 4. Finally, we conclude in Section 5.

2. Background

2.1. Ordinary Least Squares and Gradient Descent

Consider the simple linear regression model where x ∈ R^d is a random input column vector and y is the corresponding output variable. Since batch normalization is applied for each feature separately, in order to gain key insights it is sufficient to consider the case y ∈ R. A noisy linear relationship is assumed between the dependent variable y and the independent variables x, i.e. y = x^T w + noise, where w ∈ R^d is the vector of trainable parameters. Denote the following moments:

    H := E[xx^T],    g := E[xy],    c := E[y^2].    (1)

To simplify the analysis, we assume the covariance matrix H of x is positive definite and the mean E[x] of x is zero. The eigenvalues of H are denoted as λ_i(H), i = 1, 2, ..., d. In particular, the maximum and minimum eigenvalues of H are denoted by λ_max and λ_min respectively. The condition number of H is defined as κ := λ_max / λ_min. Note that the positive definiteness of H allows us to define the vector norm ||·||_H by ||x||_H^2 = x^T H x.

The ordinary least squares (OLS) method for estimating the unknown parameters w leads to the following optimization problem,

    min_{w ∈ R^d} J_0(w) := (1/2) E_{x,y}[(y − x^T w)^2]    (2)
                          = c/2 − w^T g + (1/2) w^T H w,

which has unique minimizer w = u := H^(-1) g.

The gradient descent (GD) method (with step size or learning rate ε) for solving the optimization problem (2) is given by the iteration

    w_{k+1} = w_k − ε ∇_w J_0(w_k) = (I − εH) w_k + ε g,    (3)

which converges if 0 < ε < 2/λ_max =: ε_max, and the convergence rate is determined by the spectral radius ρ_ε := ρ(I − εH) = max_i |1 − ε λ_i(H)|, with

    ||u − w_{k+1}|| ≤ ρ(I − εH) ||u − w_k||.    (4)

It is well-known (e.g. [...]

[...] reasons which will become clear in the subsequent analysis. We thus have the following discrete-time dynamical system:

    a_{k+1} = a_k + ε_a (w_k^T g / σ_k − a_k),    (7)

    w_{k+1} = w_k + ε (a_k / σ_k) (g − (w_k^T g / σ_k^2) H w_k).    (8)

To simplify subsequent notation, we denote by H* the matrix

    H* := H − (H u u^T H) / (u^T H u),    (9)
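To make the contrast between the GD iteration (3) and the BNGD system (7)-(8) concrete, the following minimal sketch (ours, not the authors' code) simulates both on a random OLS instance. It assumes σ_k = (w_k^T H w_k)^(1/2), i.e. the H-norm of w_k, which is the natural normalization constant in this population setting but is not stated explicitly in the excerpt above. With a learning rate above ε_max = 2/λ_max, plain GD diverges, while BNGD is still expected to converge per the paper's claim.

```python
import numpy as np

# Minimal sketch of GD (Eq. 3) vs. BNGD (Eqs. 7-8) on a random OLS instance.
# Assumption: sigma_k = sqrt(w_k^T H w_k); BNGD's recovered regressor is (a/sigma) * w.
rng = np.random.default_rng(0)
d = 10
A = rng.standard_normal((d, d))
H = A @ A.T + 0.1 * np.eye(d)        # positive-definite H = E[xx^T]
u = rng.standard_normal(d)           # ground-truth minimizer
g = H @ u                            # g = E[xy], so that u = H^{-1} g

def gd(eps, steps=500):
    w = np.zeros(d)
    for _ in range(steps):
        w = (np.eye(d) - eps * H) @ w + eps * g                         # Eq. (3)
    return np.linalg.norm(w - u)

def bngd(eps, eps_a=1.0, steps=500):
    w, a = rng.standard_normal(d), 0.0                                  # w != 0 so sigma > 0
    for _ in range(steps):
        sigma = np.sqrt(w @ H @ w)
        a = a + eps_a * (w @ g / sigma - a)                             # Eq. (7)
        w = w + eps * (a / sigma) * (g - (w @ g / sigma**2) * (H @ w))  # Eq. (8)
    sigma = np.sqrt(w @ H @ w)
    return np.linalg.norm((a / sigma) * w - u)

eps_max = 2.0 / np.linalg.eigvalsh(H).max()
print("GD,   eps = 0.9 * eps_max:", gd(0.9 * eps_max))
print("GD,   eps = 2.0 * eps_max:", gd(2.0 * eps_max, steps=100))  # diverges
print("BNGD, eps = 2.0 * eps_max:", bngd(2.0 * eps_max))           # converges despite the large step
```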
Recommended publications
  • Batch Normalization
Deep Learning (Sargur N. Srihari, [email protected]): Batch Normalization. Topics in Optimization for Deep Models: importance of optimization in machine learning; how learning differs from optimization; challenges in neural network optimization; basic optimization algorithms; parameter initialization strategies; algorithms with adaptive learning rates; approximate second-order methods; optimization strategies and meta-algorithms. Topics in Optimization Strategies and Meta-Algorithms: 1. Batch Normalization 2. Coordinate Descent 3. Polyak Averaging 4. Supervised Pretraining 5. Designing Models to Aid Optimization 6. Continuation Methods and Curriculum Learning. Overview of optimization strategies: many optimization techniques are general templates that can be specialized to yield algorithms, and they can be incorporated into different algorithms. Batch normalization is an exciting recent innovation, motivated by the difficulty of choosing the learning rate ε in deep networks; the method replaces activations with zero-mean, unit-variance activations. Adding normalization between layers is motivated by the difficulty of training deep models: the method adds an additional step between layers, in which the output of the earlier layer is normalized by standardizing the mean and standard deviation of each individual unit. It is a method of adaptive re-parameterization; it is not an optimization
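As a minimal illustration of the normalization step these notes describe (a sketch, not code from the notes), each unit of a layer's output is standardized to zero mean and unit variance over the mini-batch, and the usual learnable scale and shift provide the adaptive re-parameterization:

```python
import numpy as np

def batch_norm_forward(h, gamma, beta, eps=1e-5):
    """Standardize each unit over the batch, then apply the learnable scale/shift."""
    mu = h.mean(axis=0)                     # per-unit mean over the mini-batch
    var = h.var(axis=0)                     # per-unit variance over the mini-batch
    h_hat = (h - mu) / np.sqrt(var + eps)   # zero-mean, unit-variance activations
    return gamma * h_hat + beta             # adaptive re-parameterization

h = np.random.randn(32, 4) * 3.0 + 1.0      # a batch of 32 examples, 4 units
out = batch_norm_forward(h, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=0), out.std(axis=0))    # approximately 0 and 1 per unit
```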
  • On Self Modulation for Generative Adversarial Networks
Published as a conference paper at ICLR 2019. ON SELF MODULATION FOR GENERATIVE ADVERSARIAL NETWORKS. Ting Chen (University of California, Los Angeles); Mario Lucic, Neil Houlsby, Sylvain Gelly (Google Brain). ABSTRACT: Training Generative Adversarial Networks (GANs) is notoriously challenging. We propose and study an architectural modification, self-modulation, which improves GAN performance across different data sets, architectures, losses, regularizers, and hyperparameter settings. Intuitively, self-modulation allows the intermediate feature maps of a generator to change as a function of the input noise vector. While reminiscent of other conditioning techniques, it requires no labeled data. In a large-scale empirical study we observe a relative decrease of 5%–35% in FID. Furthermore, all else being equal, adding this modification to the generator leads to improved performance in 124/144 (86%) of the studied settings. Self-modulation is a simple architectural change that requires no additional parameter tuning, which suggests that it can be applied readily to any GAN. 1 INTRODUCTION: Generative Adversarial Networks (GANs) are a powerful class of generative models successfully applied to a variety of tasks such as image generation (Zhang et al., 2017; Miyato et al., 2018; Karras et al., 2017), learned compression (Tschannen et al., 2018), super-resolution (Ledig et al., 2017), inpainting (Pathak et al., 2016), and domain transfer (Isola et al., 2016; Zhu et al., 2017). Training GANs is a notoriously challenging task (Goodfellow et al., 2014; Arjovsky et al., 2017; Lucic et al., 2018) as one is searching in a high-dimensional parameter space for a Nash equilibrium of a non-convex game.
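A hedged sketch of the self-modulation idea described above (ours, not the authors' code; class and layer names are illustrative): the batch-normalization scale and shift of a generator block are computed from the input noise vector z by small MLPs, so the block's feature maps change as a function of z without any labels.

```python
import torch
import torch.nn as nn

class SelfModulatedBN(nn.Module):
    """Batch norm whose scale/shift are functions of the generator's input noise z."""
    def __init__(self, num_features, z_dim, hidden=32):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.to_gamma = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, num_features))
        self.to_beta = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, num_features))

    def forward(self, x, z):
        h = self.bn(x)
        gamma = 1.0 + self.to_gamma(z)[:, :, None, None]   # modulated scale
        beta = self.to_beta(z)[:, :, None, None]           # modulated shift
        return gamma * h + beta

x = torch.randn(8, 64, 16, 16)   # feature maps of one generator block
z = torch.randn(8, 128)          # input noise vector
print(SelfModulatedBN(64, 128)(x, z).shape)   # torch.Size([8, 64, 16, 16])
```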
  • A Regularization Study for Policy Gradient Methods
A Regularization Study for Policy Gradient Methods. Master Thesis to obtain the academic degree of Diplom-Ingenieur in the Master's Program Computer Science. Submitted by Florian Henkel. Submitted at: Institute of Computational Perception. Supervisor: Univ.-Prof. Dr. Gerhard Widmer. Co-Supervisor: Dipl.-Ing. Matthias Dorfer. July 2018. Johannes Kepler University Linz, Altenbergerstraße 69, 4040 Linz, Österreich, www.jku.at. Abstract: Regularization is an important concept in the context of supervised machine learning. Especially with neural networks it is necessary to restrict their capacity and expressivity in order to avoid overfitting to given train data. While there are several well-known and widely used regularization techniques for supervised machine learning such as L2-Normalization, Dropout or Batch-Normalization, their effect in the context of reinforcement learning is not yet investigated. In this thesis we give an overview of regularization in combination with policy gradient methods, a subclass of reinforcement learning algorithms relying on neural networks. We compare different state-of-the-art algorithms together with regularization methods for supervised learning to get a better understanding of how we can improve generalization in reinforcement learning. The main motivation for exploring this line of research is our current work on score following, where we try to train reinforcement learning agents to listen to and read music. These agents should learn from given musical training pieces to follow music they have never heard and seen before. Thus, the agents have to generalize, which is why this scenario is a suitable test bed for investigating generalization in the context of reinforcement learning.
  • Entropy-Based Aggregate Posterior Alignment Techniques for Deterministic Autoencoders and Implications for Adversarial Examples
Entropy-based aggregate posterior alignment techniques for deterministic autoencoders and implications for adversarial examples, by Amur Ghose. A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Master of Mathematics in Computer Science. Waterloo, Ontario, Canada, 2020. © Amur Ghose 2020. Author's Declaration: This thesis consists of material all of which I authored or co-authored: see Statement of Contributions included in the thesis. This is a true copy of the thesis, including any required final revisions, as accepted by my examiners. I understand that my thesis may be made electronically available to the public. Statement of Contributions: Chapters 1 and 3 consist of unpublished work solely written by myself, with proofreading and editing suggestions from my supervisor, Pascal Poupart. Chapter 2 is (with very minor changes) a UAI 2020 paper (paper link) on which I was the lead author, wrote the manuscript, formulated the core idea and ran the majority of the experiments. Some of the experiments were run by Abdullah Rashwan, a co-author on the paper (credited in the acknowledgements of the thesis). My supervisor Pascal Poupart again proofread and edited the manuscript and made many valuable suggestions. Note that UAI 2020 proceedings are Open Access under Creative Commons, and as such, no copyright section is provided with the thesis, and as the official proceedings are as of yet unreleased a link has been provided in lieu of a citation. Abstract: We present results obtained in the context of generative neural models, specifically autoencoders, utilizing standard results from coding theory.
  • Exponential Convergence Rates for Batch Normalization: The Power of Length-Direction Decoupling in Non-Convex Optimization (arXiv:1805.10694v3 [stat.ML], 6 Oct 2018)
Exponential convergence rates for Batch Normalization: The power of length-direction decoupling in non-convex optimization. Jonas Kohler*, Hadi Daneshmand*, Aurelien Lucchi, Thomas Hofmann (ETH Zurich); Ming Zhou, Klaus Neymeyr (Universität Rostock). Abstract: Normalization techniques such as Batch Normalization have been applied successfully for training deep neural networks. Yet, despite its apparent empirical benefits, the reasons behind the success of Batch Normalization are mostly hypothetical. We here aim to provide a more thorough theoretical understanding from a classical optimization perspective. Our main contribution towards this goal is the identification of various problem instances in the realm of machine learning where Batch Normalization can provably accelerate optimization. We argue that this acceleration is due to the fact that Batch Normalization splits the optimization task into optimizing [...] (BN) (Ioffe and Szegedy, 2015). This technique has been proven to successfully stabilize and accelerate training of deep neural networks and is thus by now standard in many state-of-the-art architectures such as ResNets (He et al., 2016) and the latest Inception Nets (Szegedy et al., 2017). The success of Batch Normalization has promoted its key idea that normalizing the inner layers of a neural network stabilizes training, which recently led to the development of many such normalization methods such as (Arpit et al., 2016; Klambauer et al., 2017; Salimans and Kingma, 2016) and (Ba et al., 2016), to name just a few. Yet, despite the ever more important role of Batch Normalization for training deep neural networks, the Machine Learning community is mostly relying on empirical evidence and thus lacking a thorough theoretical understanding that can explain such success.
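A short formal sketch (our notation, not necessarily the authors') of the length-direction decoupling named in the title: a batch-normalized linear prediction depends on the weight vector only through its direction, so the objective can be optimized separately over a scalar length variable and the weight direction.

```latex
% Hedged sketch, our notation: with E[x] = 0 and H := E[x x^T], the batch-normalized
% prediction is invariant to rescaling of w, which decouples a scalar "length" a
% from the "direction" w / ||w||_H.
\[
  f(a, w) \;=\; \mathbb{E}\,\ell\!\left(a \,\frac{x^{\top} w}{\sqrt{w^{\top} H w}},\; y\right),
  \qquad
  f(a, \lambda w) \;=\; f(a, w) \quad \text{for all } \lambda > 0 .
\]
```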
  • Norm Matters: Efficient and Accurate Normalization Schemes in Deep Networks
Norm matters: efficient and accurate normalization schemes in deep networks. Elad Hoffer (1,*), Ron Banner (2,*), Itay Golan (1,*), Daniel Soudry (1). {elad.hoffer, itaygolan, daniel.soudry}@gmail.com, {ron.banner}@intel.com. (1) Technion - Israel Institute of Technology, Haifa, Israel. (2) Intel - Artificial Intelligence Products Group (AIPG). (arXiv:1803.01814v3 [stat.ML], 7 Feb 2019.) Abstract: Over the past few years, Batch-Normalization has been commonly used in deep networks, allowing faster training and high performance for a wide variety of applications. However, the reasons behind its merits remained unanswered, with several shortcomings that hindered its use for certain tasks. In this work, we present a novel view on the purpose and function of normalization methods and weight-decay, as tools to decouple weights' norm from the underlying optimized objective. This property highlights the connection between practices such as normalization, weight decay and learning-rate adjustments. We suggest several alternatives to the widely used L2 batch-norm, using normalization in L1 and L∞ spaces that can substantially improve numerical stability in low-precision implementations as well as provide computational and memory benefits. We demonstrate that such methods enable the first batch-norm alternative to work for half-precision implementations. Finally, we suggest a modification to weight-normalization, which improves its performance on large-scale tasks. 1 Introduction: Deep neural networks are known to benefit from normalization between consecutive layers. This was made noticeable with the introduction of Batch-Normalization (BN) [19], which normalizes the output of each layer to have zero mean and unit variance for each channel across the training batch. This idea was later developed to act across channels instead of the batch dimension in Layer-normalization [2] and improved in certain tasks with methods such as Batch-Renormalization [18], Instance-normalization [35] and Group-Normalization [40].
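A hedged sketch of the kind of L1-based alternative the abstract refers to (ours, not the paper's code): the per-feature standard deviation is replaced by a mean-absolute-deviation statistic, avoiding the squares and square roots that are problematic in low precision. The sqrt(pi/2) factor, which rescales the L1 statistic to estimate the standard deviation, assumes roughly Gaussian activations.

```python
import numpy as np

def l1_batch_norm(h, gamma, beta, eps=1e-5):
    """Normalize each feature by a mean-absolute-deviation statistic instead of the std."""
    mu = h.mean(axis=0)
    mad = np.abs(h - mu).mean(axis=0) * np.sqrt(np.pi / 2.0)   # Gaussian-calibrated L1 scale
    return gamma * (h - mu) / (mad + eps) + beta

h = np.random.randn(64, 8).astype(np.float32)
out = l1_batch_norm(h, np.ones(8, dtype=np.float32), np.zeros(8, dtype=np.float32))
print(out.std(axis=0))   # close to 1 per feature
```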
  • A Batch Normalized Inference Network Keeps the KL Vanishing Away
A Batch Normalized Inference Network Keeps the KL Vanishing Away. Qile Zhu (1), Wei Bi (2), Xiaojiang Liu (2), Xiyao Ma (1), Xiaolin Li (3) and Dapeng Wu (1). (1) University of Florida, (2) Tencent AI Lab, (3) AI Institute, Tongdun Technology. Abstract: Variational Autoencoder (VAE) is widely used as a generative model to approximate a model's posterior on latent variables by combining the amortized variational inference and deep neural networks. However, when paired with strong autoregressive decoders, VAE often converges to a degenerated local optimum known as "posterior collapse". Previous approaches consider the Kullback–Leibler divergence (KL) individually for each datapoint. We propose to let the KL follow a distribution across the whole dataset, and analyze that it is sufficient to prevent posterior collapse by keeping the expectation of the KL's distribution positive. Then we propose Batch Normalized-VAE (BN-VAE), a simple but effective approach. [...] inference, VAE first samples the latent variable from the prior distribution and then feeds it into the decoder to generate an instance. VAE has been successfully applied in many NLP tasks, including topic modeling (Srivastava and Sutton, 2017; Miao et al., 2016; Zhu et al., 2018), language modeling (Bowman et al., 2016), text generation (Zhao et al., 2017b) and text classification (Xu et al., 2017). An autoregressive decoder (e.g., a recurrent neural network) is a common choice to model the text data. However, when paired with strong autoregressive decoders such as LSTMs (Hochreiter and Schmidhuber, 1997) and trained under conventional training strategy, VAE suffers from a well-known problem named the posterior collapse or the KL vanishing problem.
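One way to realize the idea in this abstract, sketched under our own reading rather than as the paper's exact construction: batch-normalize the inference network's posterior means with a fixed scale, so that across a batch the means have controlled second moments and the expected KL term stays bounded away from zero.

```python
import torch
import torch.nn as nn

class BNInferenceHead(nn.Module):
    """Posterior head whose means are batch-normalized with a fixed scale (illustrative)."""
    def __init__(self, hidden_dim, latent_dim, fixed_scale=0.5):
        super().__init__()
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.bn_mu = nn.BatchNorm1d(latent_dim, affine=False)
        self.fixed_scale = fixed_scale      # fixed, so E[mu^2] cannot collapse to 0

    def forward(self, h):
        mu = self.fixed_scale * self.bn_mu(self.to_mu(h))
        return mu, self.to_logvar(h)

h = torch.randn(16, 32)
mu, logvar = BNInferenceHead(32, 8)(h)
kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=1)
print(kl.mean())   # the mu^2 term keeps the average KL positive
```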
  • Predicting Uber Demand in NYC with Wavenet
SMART ACCESSIBILITY 2019: The Fourth International Conference on Universal Accessibility in the Internet of Things and Smart Environments. Predicting Uber Demand in NYC with Wavenet. Long Chen (Urban Big Data Center, Glasgow, UK), Konstantinos Ampountolas (School of Engineering, University of Glasgow, UK), Piyushimita (Vonu) Thakuriah (Rutgers University, New Brunswick, NJ, USA). Abstract: Uber demand prediction is at the core of intelligent transportation systems when developing a smart city. However, exploiting Uber real-time data to facilitate the demand prediction is a thorny problem since user demand is usually unevenly distributed over time and space. We develop a Wavenet-based model to predict Uber demand on an hourly basis. In this paper, we present a multi-level Wavenet framework which is a one-dimensional convolutional neural network that includes two sub-networks which encode the source series and decode the predicting series, respectively. The two sub-networks are combined by stacking the decoder on top of the encoder, which, in turn, preserves the temporal patterns of the time series. Experiments on a large-scale real Uber demand dataset of NYC demonstrate that our model is highly competitive to the existing ones. (Figure 1. The structure of WaveNet, where different colours in the embedding input denote 2×k, 3×k, and 4×k convolutional filters respectively.) I. INTRODUCTION: With the proliferation of Web 2.0, ride sharing applications, such as Uber, have become a popular way to search nearby sharing rides. [...] at an hourly basis, which is a WaveNet-based neural network
  • Large Memory Layers with Product Keys
Large Memory Layers with Product Keys. Guillaume Lample*†, Alexandre Sablayrolles*, Marc'Aurelio Ranzato*, Ludovic Denoyer*†, Hervé Jégou*. Abstract: This paper introduces a structured memory which can be easily integrated into a neural network. The memory is very large by design and significantly increases the capacity of the architecture, by up to a billion parameters with a negligible computational overhead. Its design and access pattern is based on product keys, which enable fast and exact nearest neighbor search. The ability to increase the number of parameters while keeping the same computational budget lets the overall system strike a better trade-off between prediction accuracy and computation efficiency both at training and test time. This memory layer allows us to tackle very large scale language modeling tasks. In our experiments we consider a dataset with up to 30 billion words, and we plug our memory layer in a state-of-the-art transformer-based architecture. In particular, we found that a memory augmented model with only 12 layers outperforms a baseline transformer model with 24 layers, while being twice as fast at inference time. We release our code for reproducibility purposes. 1 Introduction: Neural networks are commonly employed to address many complex tasks such as machine translation [43], image classification [27] or speech recognition [16]. As more and more data becomes available for training, these networks are increasingly larger [19]. For instance, recent models both in vision [29] and in natural language processing [20, 36, 28] have more than a billion parameters.
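A simplified, hedged sketch of the product-key access pattern described above (dimensions, names, and the single-query form are ours): the query is split into two halves, each half is scored against a small set of sub-keys, and the best combinations from the Cartesian product of the two sub-key sets index into a very large value table. The full key set has |K1| * |K2| entries, yet only the top candidates from each half ever need to be combined.

```python
import torch

def product_key_lookup(q, subkeys1, subkeys2, values, topk=4):
    """Select memory slots via product keys: score sub-key halves, combine the top candidates."""
    d = q.shape[-1] // 2
    s1, s2 = subkeys1 @ q[:d], subkeys2 @ q[d:]    # scores of each half against its sub-keys
    v1, i1 = s1.topk(topk)
    v2, i2 = s2.topk(topk)
    comb = (v1[:, None] + v2[None, :]).flatten()   # score of pair (i, j) is s1[i] + s2[j]
    top = comb.topk(topk)
    rows, cols = i1[top.indices // topk], i2[top.indices % topk]
    flat_idx = rows * subkeys2.shape[0] + cols     # index into the |K1| * |K2| value table
    w = torch.softmax(top.values, dim=0)
    return (w[:, None] * values[flat_idx]).sum(0)  # weighted sum of the selected values

K = 128                                            # 128 sub-keys per half -> 16384 memory slots
out = product_key_lookup(torch.randn(64),
                         torch.randn(K, 32), torch.randn(K, 32),
                         torch.randn(K * K, 16))
print(out.shape)                                   # torch.Size([16])
```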
  • Approach Pre-Trained Deep Learning Models with Caution: Pre-Trained Models Are Easy to Use, but Are You Glossing Over Details That Could Impact Your Model Performance?
Approach pre-trained deep learning models with caution. Pre-trained models are easy to use, but are you glossing over details that could impact your model performance? Cecelia Shao, Apr 15. How many times have you run the following snippets:
    import torchvision.models as models
    inception = models.inception_v3(pretrained=True)
or
    from keras.applications.inception_v3 import InceptionV3
    base_model = InceptionV3(weights='imagenet', include_top=False)
It seems like using these pre-trained models has become a new standard for industry best practices. After all, why wouldn't you take advantage of a model that's been trained on more data and compute than you could ever muster by yourself? See the discussion on Reddit and HackerNews. Long live pre-trained models! There are several substantial benefits to leveraging pre-trained models: super simple to incorporate; achieve solid (same or even better) model performance quickly; there's not as much labeled data required; versatile use cases from transfer learning, prediction, and feature extraction. Advances within the NLP space have also encouraged the use of pre-trained language models like GPT and GPT-2, AllenNLP's ELMo, Google's BERT, and Sebastian Ruder and Jeremy Howard's ULMFiT (for an excellent overview of these models, see this TOPBOTs post). One common technique for leveraging pretrained models is feature extraction, where you're retrieving intermediate representations produced by the pretrained model and using those representations as inputs for a new model. These final fully-connected layers are generally assumed to capture information that is relevant for solving a new task. Everyone's in on the game: every major framework like Tensorflow, Keras, PyTorch, MXNet, etc. offers pre-trained models like Inception V3, ResNet, AlexNet with weights: Keras Applications; PyTorch torchvision.models; Tensorflow Official Models (and now TensorFlow Hub); MXNet Model Zoo; Fast.ai Applications. Easy, right?
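A small example of the feature-extraction pattern the post describes (a sketch; the choice of resnet18 and the layer sizes are ours for illustration): load a pretrained network, drop its final classification layer, and use the remaining trunk to produce representations that feed a new, task-specific model.

```python
import torch
import torchvision.models as models

backbone = models.resnet18(pretrained=True)   # downloads ImageNet weights on first use
backbone.fc = torch.nn.Identity()             # drop the final fully-connected layer
backbone.eval()                               # use the stored batch-norm statistics

with torch.no_grad():
    images = torch.randn(4, 3, 224, 224)      # stand-in for a properly preprocessed batch
    features = backbone(images)               # one 512-d representation per image
print(features.shape)                         # torch.Size([4, 512])

# The extracted features then become inputs to a new model, e.g. a small classifier head.
classifier = torch.nn.Linear(512, 10)
logits = classifier(features)
```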
  • Outline for Today's Presentation
Outline for today's presentation: We will see how RNNs and CNNs compare on a variety of tasks; then we will go through a new approach for sequence modelling that has become state of the art; finally, we will look at a few augmented RNN models. RNNs vs CNNs: Empirical Evaluation of Generic Networks for Sequence Modelling. Let's say you are given a sequence modelling task of text classification / music note prediction, and you are asked to develop a simple model. What would your baseline model be based on: RNNs or CNNs? Recent trend in sequence modelling: this is widely considered RNNs' "home turf", but recent research has shown otherwise. Speech synthesis: WaveNet uses dilated convolutions for synthesis. Char-to-char machine translation: ByteNet uses an encoder-decoder architecture and dilated convolutions, tested on an English-German dataset. Word-to-word machine translation: hybrid CNN-LSTM on English-Romanian and English-French datasets. Character-level language modelling: ByteNet on the WikiText dataset. Word-level language modelling: gated CNNs on the WikiText dataset. Temporal Convolutional Network (TCN): a model that uses best practices in convolutional network design. The properties of a TCN: causal, so there is no information leakage from future to past; memory, so it can look very far into the past for prediction/synthesis; input, so it can take any arbitrary length sequence with proper tuning to the particular task; and simple, as it uses no gating mechanism, no complex stacking mechanism, and each layer output has the same length as the input.
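A hedged sketch of the causal, dilated 1-D convolution that such TCN/WaveNet-style models are built from (ours, simplified): left-padding by (kernel_size - 1) * dilation keeps output t independent of inputs after t, stacking layers with growing dilation lets the network look far into the past, and each layer's output has the same length as its input.

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """1-D convolution with left padding only, so there is no leakage from the future."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                      # x: (batch, channels, time)
        return self.conv(nn.functional.pad(x, (self.pad, 0)))

x = torch.randn(2, 16, 100)
tcn = nn.Sequential(*[CausalConv1d(16, dilation=2 ** i) for i in range(4)])
print(tcn(x).shape)                            # (2, 16, 100): same length as the input
```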
  • Rethinking the Usage of Batch Normalization and Dropout in the Training of Deep Neural Networks
Rethinking the Usage of Batch Normalization and Dropout in the Training of Deep Neural Networks. Guangyong Chen* (1), Pengfei Chen* (2,1), Yujun Shi (3), Chang-Yu Hsieh (1), Benben Liao (1), Shengyu Zhang (2,1). Abstract: In this work, we propose a novel technique to boost training efficiency of a neural network. Our work is based on an excellent idea that whitening the inputs of neural networks can achieve a fast convergence speed. Given the well-known fact that independent components must be whitened, we introduce a novel Independent-Component (IC) layer before each weight layer, whose inputs would be made more independent. However, determining independent components is a computationally intensive task. To overcome this challenge, we propose to implement an IC layer by combining two popular techniques, Batch Normalization and Dropout, in a new manner that we can rigorously prove that Dropout can quadratically reduce the mutual information and linearly [...] impressive performance. The state-of-the-art neural networks are often complex structures comprising hundreds of layers of neurons and millions of parameters. Efficient training of a modern DNN is often complicated by the need of feeding such a behemoth with millions of data entries. Developing novel techniques to increase training efficiency of DNNs is a highly active research topic. In this work, we propose a novel training technique by combining two commonly used ones, Batch Normalization (BatchNorm) (Ioffe & Szegedy, 2015) and Dropout (Srivastava et al., 2014), for a purpose (making independent inputs to neural networks) that is not possibly achieved by either technique alone. This marriage of techniques endows a new perspective on how Dropout could be used for training DNNs and achieve the original goal of whitening inputs of every layer (Le Cun et al., 1991; Ioffe & Szegedy, 2015) that inspired the BatchNorm work (but did not succeed).
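A hedged sketch of the IC-layer idea described above (ours, not the authors' code; the drop probability and layer sizes are illustrative): Batch Normalization followed by Dropout is placed before each weight layer, so the inputs fed to the weights are standardized and made more independent.

```python
import torch
import torch.nn as nn

def ic_layer(num_features, drop_p=0.1):
    """Independent-Component layer: BatchNorm then Dropout, applied before a weight layer."""
    return nn.Sequential(nn.BatchNorm1d(num_features), nn.Dropout(drop_p))

# Example: a small MLP rearranged so that an IC layer precedes each linear (weight) layer.
net = nn.Sequential(
    ic_layer(64), nn.Linear(64, 128), nn.ReLU(),
    ic_layer(128), nn.Linear(128, 10),
)
print(net(torch.randn(32, 64)).shape)   # torch.Size([32, 10])
```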