Deep Learning 6.4. Batch Normalization


François Fleuret, https://fleuret.org/dlc/

We saw that maintaining proper statistics of the activations and derivatives was a critical issue to allow the training of deep architectures. It was the main motivation behind Xavier's weight initialization rule.

A different approach consists of explicitly forcing the activation statistics during the forward pass by re-normalizing them. Batch normalization, proposed by Ioffe and Szegedy (2015), was the first method introducing this idea.

"Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization [...]" (Ioffe and Szegedy, 2015)

Batch normalization can be done anywhere in a deep architecture, and forces the activations' first and second order moments, so that the following layers do not need to adapt to their drift.

During training, batch normalization shifts and rescales according to the mean and variance estimated on the batch. During test, it simply shifts and rescales according to the empirical moments estimated during training.

Note that processing a batch jointly is unusual: the operations used in deep models can virtually always be formalized per-sample.
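To make the train/test difference concrete, here is a minimal sketch (not from the lecture, with names chosen purely for illustration): the same nn.BatchNorm1d module normalizes with the statistics of the current batch in train mode, and with the running estimates it has accumulated during training in eval mode.

import torch
from torch import nn

bn = nn.BatchNorm1d(3)

# A batch whose components are far from zero mean / unit variance.
x = torch.empty(1000, 3).normal_() * 5.0 + 10.0

bn.train()
y_train = bn(x)          # normalized with the mean/variance of x itself
print(y_train.mean(0))   # close to 0 for every component
print(y_train.std(0))    # close to 1 for every component

bn.eval()
y_test = bn(x)           # uses bn.running_mean / bn.running_var instead
print(bn.running_mean)   # moving average of the batch means seen so far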
If $x_b \in \mathbb{R}^D$, $b = 1, \ldots, B$, are the samples in the batch, we first compute the empirical per-component mean and variance on the batch

$$\hat{m}_{\text{batch}} = \frac{1}{B} \sum_{b=1}^{B} x_b, \qquad \hat{v}_{\text{batch}} = \frac{1}{B} \sum_{b=1}^{B} \left( x_b - \hat{m}_{\text{batch}} \right)^2,$$

from which we compute the normalized values $z_b \in \mathbb{R}^D$ and the outputs $y_b \in \mathbb{R}^D$

$$\forall b = 1, \ldots, B, \quad z_b = \frac{x_b - \hat{m}_{\text{batch}}}{\sqrt{\hat{v}_{\text{batch}} + \epsilon}}, \qquad y_b = \gamma \odot z_b + \beta,$$

where $\odot$ is the Hadamard component-wise product, and $\gamma \in \mathbb{R}^D$ and $\beta \in \mathbb{R}^D$ are parameters to optimize.

During inference, batch normalization shifts and rescales each component of the input $x$ independently, according to statistics estimated during training:

$$y = \gamma \odot \frac{x - \hat{m}}{\sqrt{\hat{v} + \epsilon}} + \beta.$$

Hence, during inference, batch normalization performs a component-wise affine transformation, and it can process samples individually.

Note that, as with dropout, the model behaves differently during train and test.

As with dropout, batch normalization is implemented as a separate module that processes its input components independently.

>>> bn = nn.BatchNorm1d(3)
>>> with torch.no_grad():
...     bn.bias.copy_(torch.tensor([2., 4., 8.]))
...     bn.weight.copy_(torch.tensor([1., 2., 3.]))
...
Parameter containing:
tensor([2., 4., 8.], requires_grad=True)
Parameter containing:
tensor([1., 2., 3.], requires_grad=True)
>>> x = torch.empty(1000, 3).normal_()
>>> x = x * torch.tensor([2., 5., 10.]) + torch.tensor([-10., 25., 3.])
>>> x.mean(0)
tensor([-9.9669, 25.0213,  2.4361])
>>> x.std(0)
tensor([1.9063, 5.0764, 9.7474])
>>> y = bn(x)
>>> y.mean(0)
tensor([2.0000, 4.0000, 8.0000], grad_fn=<MeanBackward2>)
>>> y.std(0)
tensor([1.0005, 2.0010, 3.0015], grad_fn=<StdBackward1>)

As for any other module, we have to compute the derivatives of the loss $\mathscr{L}$ with respect to the input values and the parameters. For clarity, since the components are processed independently, in what follows we consider a single dimension and do not index it.

We have

$$\hat{m}_{\text{batch}} = \frac{1}{B} \sum_{b=1}^{B} x_b, \qquad \hat{v}_{\text{batch}} = \frac{1}{B} \sum_{b=1}^{B} \left( x_b - \hat{m}_{\text{batch}} \right)^2,$$

$$\forall b = 1, \ldots, B, \quad z_b = \frac{x_b - \hat{m}_{\text{batch}}}{\sqrt{\hat{v}_{\text{batch}} + \epsilon}}, \qquad y_b = \gamma z_b + \beta,$$

from which

$$\frac{\partial \mathscr{L}}{\partial \gamma} = \sum_b \frac{\partial \mathscr{L}}{\partial y_b} \frac{\partial y_b}{\partial \gamma} = \sum_b \frac{\partial \mathscr{L}}{\partial y_b} z_b, \qquad \frac{\partial \mathscr{L}}{\partial \beta} = \sum_b \frac{\partial \mathscr{L}}{\partial y_b} \frac{\partial y_b}{\partial \beta} = \sum_b \frac{\partial \mathscr{L}}{\partial y_b}.$$
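These two sums are easy to check numerically. The sketch below (my own illustration with made-up names, a single component, and an arbitrary scalar loss) writes the forward pass out explicitly and compares the closed-form derivatives with respect to gamma and beta against what autograd computes.

import torch

B, eps = 16, 1e-5
x = torch.randn(B)
gamma = torch.tensor(1.5, requires_grad=True)
beta = torch.tensor(0.2, requires_grad=True)

m = x.mean()                 # empirical mean of the batch
v = ((x - m) ** 2).mean()    # empirical (biased) variance of the batch
z = (x - m) / torch.sqrt(v + eps)
y = gamma * z + beta

loss = y.pow(2).sum()        # any scalar loss works for the check
loss.backward()

dl_dy = (2 * y).detach()     # dL/dy_b for this particular loss
print(torch.allclose(gamma.grad, (dl_dy * z).sum()))  # sum_b dL/dy_b * z_b
print(torch.allclose(beta.grad, dl_dy.sum()))         # sum_b dL/dy_b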
$$\forall b = 1, \ldots, B, \quad \frac{\partial \mathscr{L}}{\partial z_b} = \frac{\partial \mathscr{L}}{\partial y_b} \gamma,$$

$$\frac{\partial \mathscr{L}}{\partial \hat{v}_{\text{batch}}} = -\frac{1}{2} \left( \hat{v}_{\text{batch}} + \epsilon \right)^{-3/2} \sum_{b=1}^{B} \frac{\partial \mathscr{L}}{\partial z_b} \left( x_b - \hat{m}_{\text{batch}} \right),$$

$$\frac{\partial \mathscr{L}}{\partial \hat{m}_{\text{batch}}} = -\frac{1}{\sqrt{\hat{v}_{\text{batch}} + \epsilon}} \sum_{b=1}^{B} \frac{\partial \mathscr{L}}{\partial z_b},$$

$$\forall b = 1, \ldots, B, \quad \frac{\partial \mathscr{L}}{\partial x_b} = \frac{\partial \mathscr{L}}{\partial z_b} \frac{1}{\sqrt{\hat{v}_{\text{batch}} + \epsilon}} + \frac{\partial \mathscr{L}}{\partial \hat{v}_{\text{batch}}} \frac{2 \left( x_b - \hat{m}_{\text{batch}} \right)}{B} + \frac{\partial \mathscr{L}}{\partial \hat{m}_{\text{batch}}} \frac{1}{B}.$$

Since every sample in the batch impacts the moment estimates, it impacts all of the batch's outputs, and the derivative of the loss with respect to an input is complicated.

In standard implementations, the $\hat{m}$ and $\hat{v}$ used at test time are estimated with a moving average during training, so that batch normalization can be implemented as a module which does not need an additional pass through the training samples.
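To connect these expressions with what automatic differentiation produces in practice, here is a sketch (again my own, single component, illustrative names) that recomputes $\partial \mathscr{L} / \partial x_b$ from the closed-form formulas above and checks it against the gradient PyTorch returns.

import torch

B, eps = 16, 1e-5
gamma, beta = 1.5, 0.2
x = torch.randn(B, requires_grad=True)

m = x.mean()                  # empirical mean of the batch
v = ((x - m) ** 2).mean()     # empirical (biased) variance of the batch
z = (x - m) / torch.sqrt(v + eps)
y = gamma * z + beta

loss = y.pow(2).sum()
loss.backward()               # autograd's dL/dx_b ends up in x.grad

# Recompute the same gradient with the closed-form expressions.
with torch.no_grad():
    dl_dy = 2 * y
    dl_dz = dl_dy * gamma
    dl_dv = -0.5 * (v + eps) ** (-1.5) * (dl_dz * (x - m)).sum()
    dl_dm = -dl_dz.sum() / torch.sqrt(v + eps)
    dl_dx = (dl_dz / torch.sqrt(v + eps)
             + dl_dv * 2 * (x - m) / B
             + dl_dm / B)

print(torch.allclose(x.grad, dl_dx, atol=1e-6))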
