GroupNorm(num_groups, num_channels, eps=1e-05, affine=True, device=None, dtype=None)
Applies Group Normalization over a mini-batch of inputs as described in the paper Group Normalization.

$$y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta$$
The input channels are separated into num_groups groups, each containing num_channels / num_groups channels. num_channels must be divisible by num_groups. The mean and standard-deviation are calculated separately over each group. $\gamma$ and $\beta$ are learnable per-channel affine transform parameter vectors of size num_channels if affine is True. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False).
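To make the grouping concrete, here is a small sketch (illustrative, not part of the reference; the shapes are arbitrary) that reproduces the per-group statistics by hand and checks them against the module:

>>> import torch
>>> import torch.nn as nn
>>> x = torch.randn(2, 6, 4, 4)  # (N, C, H, W) with C = 6
>>> m = nn.GroupNorm(3, 6)
>>> # Reshape to (N, num_groups, C // num_groups, H, W) so each group
>>> # shares one statistics window
>>> g = x.view(2, 3, 2, 4, 4)
>>> mean = g.mean(dim=(2, 3, 4), keepdim=True)
>>> var = g.var(dim=(2, 3, 4), unbiased=False, keepdim=True)  # biased estimator
>>> manual = ((g - mean) / torch.sqrt(var + m.eps)).view_as(x)
>>> # The default affine transform (weights = 1, biases = 0) is the identity,
>>> # so the module output matches the hand computation
>>> torch.allclose(m(x), manual, atol=1e-6)
True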
This layer uses statistics computed from input data in both training and evaluation modes.
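Because no running statistics are kept, switching between train() and eval() does not change the computation; a quick illustrative check:

>>> import torch
>>> import torch.nn as nn
>>> m = nn.GroupNorm(3, 6)
>>> x = torch.randn(20, 6, 10, 10)
>>> torch.equal(m.train()(x), m.eval()(x))  # identical: no running stats
True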
num_groups (int) – number of groups to separate the channels into
num_channels (int) – number of channels expected in input
eps (float) – a value added to the denominator for numerical stability. Default: 1e-5
affine (bool) – a boolean value that when set to True, this module has learnable per-channel affine parameters initialized to ones (for weights) and zeros (for biases). Default: True
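For illustration, the affine parameters can be inspected directly; with affine=True they are per-channel vectors of length num_channels, and with affine=False no parameters are allocated:

>>> import torch.nn as nn
>>> m = nn.GroupNorm(3, 6)
>>> m.weight.shape, m.bias.shape  # one scale and one shift per channel
(torch.Size([6]), torch.Size([6]))
>>> nn.GroupNorm(3, 6, affine=False).weight is None
True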
Input: (N, C, *) where C = num_channels
Output: (same shape as input)
>>> input = torch.randn(20, 6, 10, 10)
>>> # Separate 6 channels into 3 groups
>>> m = nn.GroupNorm(3, 6)
>>> # Separate 6 channels into 6 groups (equivalent to InstanceNorm)
>>> m = nn.GroupNorm(6, 6)
>>> # Put all 6 channels into a single group (equivalent to LayerNorm)
>>> m = nn.GroupNorm(1, 6)
>>> # Activating the module
>>> output = m(input)
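A short sketch (not from the reference) verifying the InstanceNorm and LayerNorm equivalences noted above; it relies on both modules defaulting to eps=1e-5 and on GroupNorm's affine transform being the identity at initialization:

>>> import torch
>>> import torch.nn as nn
>>> x = torch.randn(20, 6, 10, 10)
>>> # One channel per group behaves like InstanceNorm2d (no affine)
>>> torch.allclose(nn.GroupNorm(6, 6)(x), nn.InstanceNorm2d(6)(x), atol=1e-6)
True
>>> # A single group behaves like LayerNorm over (C, H, W)
>>> torch.allclose(nn.GroupNorm(1, 6)(x), nn.LayerNorm([6, 10, 10])(x), atol=1e-6)
True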