In this chapter we will learn about the batch norm layer. Previously we said that feature scaling makes the job of gradient descent easier. Now we will extend this idea and normalize the activations of every Fully Connected layer or Convolution layer during training. This also means that during training we will select a batch and calculate its mean and standard deviation.
You can think of batch-norm as a kind of adaptive (or learnable) pre-processing block with trainable parameters, which also means that we need to back-propagate through it.
Here is the list of advantages of using Batch-Norm:
Improves gradient flow; needed for very deep models (e.g. ResNet)
Allows higher learning rates
Reduces the dependency on initialization
Gives some regularization (it even makes Dropout less important, but keep using it)
As a rule of thumb, if you use Dropout+BatchNorm you don't need L2 regularization
It basically forces your activations (Conv, FC outputs) to have zero mean and unit standard deviation.
To each training batch of data we apply the following normalization:
$$\hat{x}^{(k)}=\frac{x^{(k)}-E[x^{(k)}]}{\sqrt{VAR[x^{(k)}]}}$$
The output of the batch norm layer has $\gamma^{(k)},\beta^{(k)}$ as parameters. Those parameters will be learned to best represent your activations; they give a learnable scale and shift applied to the normalized value:
$$y^{(k)}=\gamma^{(k)}\cdot\hat{x}^{(k)}+\beta^{(k)}$$
Now summarizing the operations:
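In equation form, the forward pass computes the mini-batch mean and variance, normalizes, and then applies the learnable scale and shift (this is the standard formulation from the original batch norm paper):

$$\mu_B=\frac{1}{N}\sum_{i=1}^{N}x_i \qquad \sigma_B^2=\frac{1}{N}\sum_{i=1}^{N}(x_i-\mu_B)^2$$

$$\hat{x}_i=\frac{x_i-\mu_B}{\sqrt{\sigma_B^2+\epsilon}} \qquad y_i=\gamma\hat{x}_i+\beta$$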
Here, ϵ is a small number (e.g. 1e-5) added for numerical stability.
Where to use the Batch-Norm layer
The batch norm layer is used after linear layers (i.e. FC, Conv) and before the non-linear layers (ReLU).
There are actually two batch norm implementations, one for FC layers and the other for Conv layers (spatial batch-norm). The good news is that the spatial batch norm just calls the normal batch-norm after some reshapes.
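As an illustration only (the rest of the chapter implements batch norm from scratch), in a framework such as PyTorch the placement would look like this:

```python
import torch.nn as nn

# Batch norm goes after the linear/conv layer and before the ReLU
fc_block = nn.Sequential(
    nn.Linear(256, 128),
    nn.BatchNorm1d(128),   # "normal" batch norm for FC outputs
    nn.ReLU(),
)

conv_block = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.BatchNorm2d(16),    # spatial batch norm for conv outputs
    nn.ReLU(),
)
```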
Test time
At prediction time, batch norm works differently.
The mean/std are not computed based on the batch. Instead, we need to build an estimate during training of the mean/std of the whole dataset (population) for each batch norm layer in the model.
One approach to estimating the population mean and variance during training is to use an exponential moving average.
$$S_t=\alpha\cdot S_{t-1}+(1-\alpha)\cdot Y_t$$
Where:
$S_t, S_{t-1}$: current and previous estimates
$\alpha$: the degree of weighting decrease, a constant smoothing factor between 0 and 1
$Y_t$: the current batch value (mean or variance) that we're trying to estimate
Normally when we implement this layer we have some kind of flag that detects whether we're in training or testing mode.
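A rough Python/NumPy sketch of that flag together with the exponential moving average update (names like `running_mean`, `running_var` and `momentum` mirror the Matlab code further down):

```python
import numpy as np

def batchnorm_apply(x, gamma, beta, state, is_training, momentum=0.9, eps=1e-5):
    # `state` is a dict holding the running estimates built during training
    if is_training:
        mu = x.mean(axis=0)
        var = x.var(axis=0)
        # Exponential moving average: S_t = alpha*S_{t-1} + (1-alpha)*Y_t
        state['running_mean'] = momentum * state['running_mean'] + (1.0 - momentum) * mu
        state['running_var'] = momentum * state['running_var'] + (1.0 - momentum) * var
        xhat = (x - mu) / np.sqrt(var + eps)
    else:
        # At test time we use the population estimates instead of the batch statistics
        xhat = (x - state['running_mean']) / np.sqrt(state['running_var'] + eps)
    return gamma * xhat + beta
```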
As mentioned earlier, we need to know how to backpropagate through the batch-norm layer. First, as we did with other layers, we create the computation graph. After this step we calculate the derivative of each node with respect to its inputs.
Computation Graph
In order to find the partial derivatives during back-propagation, it is better to visualize the algorithm as a computation graph:
New nodes
By inspecting this graph we find some new nodes ($\frac{1}{N}\sum_{i=1}^{N}X(i)$, $x^2$, $\sqrt{x-\epsilon}$, $\frac{1}{x}$). To simplify things you can use Wolfram Alpha to find the derivatives. To backpropagate through the other nodes, refer to the Back-propagation chapter.
Where:
$$x_{cache}$$ means the cached (or saved) input from the forward propagation.
$$dout$$ means the previous block gradient
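For the two simplest new nodes the local derivatives can be written directly (or checked on Wolfram Alpha), using the same notation as the blocks below:

$$\frac{\partial(x^2)}{\partial x}=2x_{cache} \therefore dx=2x_{cache}\cdot dout \qquad \frac{\partial}{\partial x}\left(\frac{1}{x}\right)=-\frac{1}{x_{cache}^2} \therefore dx=-\frac{1}{x_{cache}^2}\cdot dout$$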
#### [Block sqrt(x-epsilon)](https://www.wolframalpha.com/input/?i=derivative+of+sqrt(x-epsilon))
![](image_folder_8/BlockBackprop_sqrt_x.png)
In other words:
$$\Large\frac{\partial(\sqrt{x-\epsilon})}{\partial x}=\frac{1}{2\sqrt{x_{cache}-\epsilon}} \therefore dx=\left[\frac{1}{2\sqrt{x_{cache}-\epsilon}}\right]\cdot dout$$
Where:
$x_{cache}$: the cached (or saved) input from the forward propagation
$dout$: the previous block gradient
ϵ: some small number (e.g. 1e-5), as before
Like the SUM block, this block copies the input gradient dout equally to all of its inputs. So for all elements in X we divide dout by N.
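In the same notation as before:

$$\frac{\partial}{\partial x_i}\left(\frac{1}{N}\sum_{j=1}^{N}x_j\right)=\frac{1}{N} \therefore dx_i=\frac{1}{N}\cdot dout$$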
Implementation
Python Forward Propagation
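A minimal NumPy sketch of the forward pass (training mode), following the same numbered steps as the Matlab code below; names like `xmu`, `ivar` and `xhat` mirror that code:

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # x has shape (N, D); gamma and beta have shape (D,)
    N, D = x.shape
    # Step1: Calculate mean on the batch
    mu = (1.0 / N) * np.sum(x, axis=0)
    # Step2: Subtract the mean from each row
    xmu = x - mu
    # Step3: Square the result
    sq = xmu ** 2
    # Step4: Calculate variance
    var = (1.0 / N) * np.sum(sq, axis=0)
    # Step5: Add eps for numerical stability, then sqrt
    sqrtvar = np.sqrt(var + eps)
    # Step6: Invert the square root
    ivar = 1.0 / sqrtvar
    # Step7: Do normalization
    xhat = xmu * ivar
    # Step8: Scale by gamma
    gammax = gamma * xhat
    # Step9: Shift by beta (batchnorm output)
    out = gammax + beta
    # Cache intermediate values for the backward pass
    cache = (xhat, gamma, xmu, ivar, sqrtvar, var, eps)
    return out, cache
```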
Python Backward Propagation
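And a sketch of the backward pass, walking the computation graph node by node; it consumes the cache produced by the forward sketch above:

```python
import numpy as np

def batchnorm_backward(dout, cache):
    xhat, gamma, xmu, ivar, sqrtvar, var, eps = cache
    N, D = dout.shape
    # Gradients of the shift and scale parameters
    dbeta = np.sum(dout, axis=0)
    dgamma = np.sum(dout * xhat, axis=0)
    # Backprop through the scale: gammax = gamma * xhat
    dxhat = dout * gamma
    # Backprop through xhat = xmu * ivar
    divar = np.sum(dxhat * xmu, axis=0)
    dxmu1 = dxhat * ivar
    # Backprop through ivar = 1 / sqrtvar
    dsqrtvar = -1.0 / (sqrtvar ** 2) * divar
    # Backprop through sqrtvar = sqrt(var + eps)
    dvar = 0.5 * (1.0 / np.sqrt(var + eps)) * dsqrtvar
    # Backprop through var = (1/N) * sum(sq): copy dvar/N to every row
    dsq = (1.0 / N) * np.ones((N, D)) * dvar
    # Backprop through sq = xmu^2
    dxmu2 = 2.0 * xmu * dsq
    # Backprop through xmu = x - mu (two branches of the graph)
    dx1 = dxmu1 + dxmu2
    dmu = -np.sum(dxmu1 + dxmu2, axis=0)
    # Backprop through mu = (1/N) * sum(x): copy dmu/N to every row
    dx2 = (1.0 / N) * np.ones((N, D)) * dmu
    dx = dx1 + dx2
    return dx, dgamma, dbeta
```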
Matlab version forward propagation
function [activations] = ForwardPropagation(obj, input, weights, bias)
obj.previousInput = input;
% Tensor format (rows,cols,channels, batch) on matlab
% Get batch size
lenSizeActivations = length(size(input));
[~,D] = size(input);
if (lenSizeActivations < 3)
N = size(input,1);
else
N = size(input,ndims(input));
end
% Initialize for the first time running_mean and running_var
if isempty(obj.running_mean)
obj.running_mean = zeros(1,D);
obj.running_var = zeros(1,D);
end
if (obj.isTraining)
% Step1: Calculate mean on the batch
mu = (1/N) * sum(input,1);
% Step2: Subtract the mean from each column
obj.xmu = input - repmat(mu,N,1);
% Step3: Calculate denominator
sq = obj.xmu .^ 2;
% Step4: Calculate variance
obj.var = (1/N) * sum(sq,1);
% Step5: add eps for numerical stability, then sqrt
obj.sqrtvar = sqrt(obj.var + obj.eps);
% Step6: Invert the square root
obj.ivar = 1./obj.sqrtvar;
%Step7: Do normalization
obj.xhat = obj.xmu .* repmat(obj.ivar,N,1);
% Step8: Now the two transformation steps (scale by gamma)
gammax = repmat(weights,N,1) .* obj.xhat;
% Step9: Adjust with bias (Batchnorm output)
activations = gammax + repmat(bias,N,1);
% Calculate running mean and variance to be used later on
% prediction
obj.running_mean = (obj.momentum .* obj.running_mean) + (1.0 - obj.momentum) * mu;
obj.running_var = (obj.momentum .* obj.running_var) + (1.0 - obj.momentum) .* obj.var;
else
xbar = (input - repmat(obj.running_mean,N,1)) ./ repmat(sqrt(obj.running_var + obj.eps),N,1);
activations = (repmat(weights,N,1) .* xbar) + repmat(bias,N,1);
end
% Store stuff for backpropagation
obj.activations = activations;
obj.weights = weights;
obj.biases = bias;
end
As mentioned before, the spatial batchnorm is used between CONV and ReLU layers. To implement the spatial batchnorm we just call the normal batchnorm but with the input reshaped and permuted. Below we present the Matlab version of forward and backward propagation of the spatial batchnorm.
% It's just a call to the normal batchnorm but with some
% permute/reshape on the input signal
function [activations] = ForwardPropagation(obj, input, weights, bias)
obj.previousInput = input;
[H,W,C,N] = size(input);
% Permute the dimensions to the following format
% (cols, channel, rows, batch)
% On python was: x.transpose((0,2,3,1))
% Python tensor format:
% (batch(0), channel(1), rows(2), cols(3))
% Matlab tensor format:
% (rows(1), cols(2), channel(3), batch(4))
inputTransposed = permute(input,[2,3,1,4]);
% Flat the input (On python the reshape is row-major)
inputFlat = reshape_row_major(inputTransposed,[(numel(inputTransposed) / C),C]);
% Call the forward propagation of normal batchnorm
activations = obj.normalBatchNorm.ForwardPropagation(inputFlat, weights, bias);
% Reshape/transpose back the signal, on python was (N,H,W,C)
activations_reshape = reshape_row_major(activations, [W,C,H,N]);
% On python was transpose(0,3,1,2)
activations = permute(activations_reshape,[3 1 2 4]);
% Store stuff for backpropagation
obj.activations = activations;
obj.weights = weights;
obj.biases = bias;
end
Now for the backpropagation we just reshape and permute again.
function [gradient] = BackwardPropagation(obj, dout)
% Observe that we use the same reshape/permutes from forward
% propagation
dout = dout.input;
[H,W,C,N] = size(dout);
% On python was: x.transpose((0,2,3,1))
dout_transp = permute(dout,[2,3,1,4]);
% Flat the input
dout_flat = reshape_row_major(dout_transp,[(numel(dout_transp) / C),C]);
% Call the backward propagation of normal batchnorm
gradDout.input = dout_flat;
gradient = obj.normalBatchNorm.BackwardPropagation(gradDout);
% Reshape/transpose back the signal, on python was (N,H,W,C)
gradient.input = reshape_row_major(gradient.input, [W,C,H,N]);
% On python was transpose(0,3,1,2)
gradient.input = permute(gradient.input,[3 1 2 4]);
end