Regularizing Deep Neural Networks by Enhancing Diversity in Feature Extraction

Babajide O. Ayinde, Student Member, IEEE, Tamer Inanc, Senior Member, IEEE, and Jacek M. Zurada, Life Fellow, IEEE

B. O. Ayinde is with the Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY 40292 USA (e-mail: [email protected]).
T. Inanc is with the Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY 40292 USA (e-mail: [email protected]).
J. M. Zurada is with the Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY 40292 USA, and also with the Information Technology Institute, University of Social Science, Łódź 90-113, Poland (corresponding author; e-mail: [email protected]).
This work was supported by the NSF under grant 1641042.

Abstract—This paper proposes a new and efficient technique to regularize neural networks in the context of deep learning using correlations among features. Previous studies have shown that over-sized deep neural network models tend to produce many redundant features that are either shifted versions of one another or are very similar and show little or no variation, thus resulting in redundant filtering. We propose a way to address this problem and show that such redundancy can be avoided using regularization and an adaptive feature-dropout mechanism. We show that regularizing both negatively and positively correlated features according to their differentiation and based on their relative cosine distances yields a network that extracts dissimilar features, with less overfitting and better generalization. The concept is illustrated with a deep multilayer perceptron, a convolutional neural network, a Gated Recurrent Unit (GRU), and Long Short-Term Memory (LSTM) on the MNIST digit recognition, CIFAR-10, ImageNet, and Stanford Natural Language Inference data sets.

Index Terms—Deep learning, feature correlation, feature clustering, regularization, cosine similarity, redundancy elimination.

I. INTRODUCTION

The expressiveness of deep neural networks, usually endowed with a huge number of trainable parameters, sometimes comes at a disadvantage when they are trained on limited amounts of data, due to their susceptibility to overfitting. To circumvent this problem, a plethora of regularization and initialization methods such as weight decay, dropout [1], and weight initialization [2] have been proposed to ameliorate the overfitting and convergence problems resulting from data scarcity and network size [3]. Moreover, recent advances in deep learning for image classification [4], [5], language processing [6], [7], and speech synthesis and recognition [8]–[10] have been attributed to efficient regularization of randomly initialized deep and complex models trained with Stochastic Gradient Descent (SGD).

Over the last few decades, research has focused on strategies for reducing overfitting and improving the capabilities of deep neural networks. Examples of such strategies include Batch Normalization [11], which aims to minimize the internal covariate shift. Also, keeping similar variance at each layer's input and output of a deep network through initialization has been shown to preserve signal propagation and improve generalization [2], [12]. Orthonormal initialization coupled with output variance normalization has also been shown to decorrelate a neural network's initial weights for better convergence [13].

Another important and popular paradigm for reducing overfitting is regularization. In general, the most commonly used regularization paradigms utilize the hidden activations, the weights, or the output distribution. The first family of regularization strategies aims to attenuate model complexity by using weight decay [14], [15] to reduce the number of effective model parameters, using dropout [1] to randomly drop hidden activations, or using DropConnect [16] to randomly drop weights during training. Even though these methods have shown improvements in generalization, they regularize in a manner that under-utilizes the capacity of the model. The second family of regularization methods focuses on improving generalization without undermining the model's capacity. For instance, [17] presented pre-training algorithms to learn decorrelated features, and [18] discusses decorrelating activations using incoherent training.

Other mechanisms that also fall into the second category are those that regularize the output distribution. In this sense, an entropy-based regularizer with deterministic annealing was applied to train multilayer perceptrons for the purpose of avoiding poor initialization and local minima, and of improving model generalization [19]. Regularization has also been applied in the form of label smoothing for estimating the marginalized effect of label dropout during training. This, in effect, reduces overfitting by restricting the model from assigning full probability to each training sample and by maintaining a reasonable ratio between the logits of the incorrect classes [20]. Label smoothing has also been achieved using a teacher model [21] instead of smoothing with a uniform distribution as in [20]. Injecting label noise has equally been shown to have a tremendous regularizing effect [22]. Moreover, models with a high level of overfitting have recently been shown to assign all output probability to a single class in the training set, giving rise to output distributions with low entropy, a phenomenon referred to as overconfidence [20]. Methods that penalize confident output distributions have also been reported for effective regularization of overconfident networks [23].

Our observations and those of other researchers [3], [24], [25] indicate that over-sized deep neural networks are typically prone to a high level of overfitting and usually rely on many redundant features that can be either shifted versions of each other or very similar with little or no variation. For instance, this redundancy is evidently pronounced in the features learned by popular deep neural network architectures such as AlexNet [26], as emphasized in [3], [27]. To address this redundancy problem, the layers of deep and/or wide architectures have to be trained under specific and well-defined sets of constraints in order to remove this limitation during training.

The work most closely related to ours is the recently introduced regularization technique known as OrthoReg [3], which locally enforces feature orthogonality by removing interference between negatively correlated features. The key idea is to regularize positively correlated features during training. In effect, OrthoReg reduces overfitting by enforcing higher decorrelation bounds on features. Our algorithms, on the other hand, aim at regularizing both negatively and positively correlated features according to their differentiation and based on their relative cosine distances. This way, we only penalize features that are correlated above a certain correlation threshold, thus strengthening feature diversity as training progresses. This approach affords the flexibility of choosing the absolute correlation bound. Hence, the proposed method leads to the elimination of redundancy and better generalization.

Other related work is [28], which aims at training neural networks for classification with few training samples by constraining the hidden units to learn class-wise invariant features, with samples of the same class having the same feature representation. It is remarked that our methods have the flavor of the two aforementioned families of regularization in the sense that we aim to improve generalization without undermining the model's capacity by bounding the pairwise correlation of features and, at the same time, temporarily dropping redundant feature maps during training.

The problem addressed here is four-fold: (i) we propose an

similar or shifted versions of one another. A symptom of learning replicated or similar features is that two or more processing units extract very similar and correlated information. From an information-theory standpoint, similar or shifted versions of filters do not add extra information to the feature hierarchy, and therefore should possibly be suppressed. In other words, the activation of one unit should not be predictable based on the activations of other units of the same layer. Enforcing feature dissimilarity in the traditional way is generally involved and would require the computation of a huge joint probability table and batch statistics, which can be computationally intractable [3].

One tractable way of computing the correlation between two features is by evaluating the cosine similarity measure (SIM_C) between them:

\[
\mathrm{SIM}_C(w_1, w_2) = \frac{\langle w_1, w_2 \rangle}{\| w_1 \| \, \| w_2 \|} \tag{1}
\]

where \(\langle w_1, w_2 \rangle\) is the inner product of arbitrary feature vectors \(w_1\) and \(w_2\). The similarity between two feature vectors corresponds to the correlation between them, that is, the cosine of the angle between the feature vectors in the feature space. Since the entries of the vectors can take both negative and positive values, SIM_C is bounded by [-1, 1]. It is 1 when \(w_1 = w_2\) or, more generally, when \(w_1\) and \(w_2\) point in the same direction. SIM_C is -1 when the two vectors point in exactly opposite directions. The two filter vectors are orthogonal in feature space when SIM_C is 0. The corresponding distance measure is given as \(D_C = 1 - \mathrm{SIM}_C\).

A. Diversity Regularization

In order to minimize the extraction of redundant features during training, it is necessary to maximize the information encoded by each hidden processing unit by incorporating a penalty term into the overall learning objective,
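As a concrete illustration, the cosine similarity of Eq. (1) and the distance D_C = 1 − SIM_C can be computed in a few lines of NumPy. The `diversity_penalty` function below, together with its threshold `tau`, is a hypothetical sketch of the idea of penalizing only feature pairs correlated above a threshold; the paper's exact penalty term is not given in this excerpt, so this should not be read as the authors' actual objective.

```python
import numpy as np

def cosine_similarity(w1, w2):
    """SIM_C of Eq. (1): cosine of the angle between feature vectors w1 and w2."""
    return np.dot(w1, w2) / (np.linalg.norm(w1) * np.linalg.norm(w2))

def cosine_distance(w1, w2):
    """Corresponding distance measure D_C = 1 - SIM_C."""
    return 1.0 - cosine_similarity(w1, w2)

def diversity_penalty(W, tau=0.5):
    """Hypothetical diversity penalty: sum of squared similarities over
    distinct feature pairs whose |SIM_C| exceeds the threshold tau.
    W: (n_features, dim) matrix with one feature vector per row."""
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)  # row-normalize
    S = Wn @ Wn.T                                      # pairwise SIM_C matrix
    iu = np.triu_indices_from(S, k=1)                  # distinct pairs only
    sims = S[iu]
    return float(np.sum(np.where(np.abs(sims) > tau, sims**2, 0.0)))
```

Computing all pairwise similarities as a single matrix product of row-normalized features, as above, avoids an explicit double loop over feature pairs and scales to wide layers.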
