BatchRenorm1d
- class torchrl.modules.BatchRenorm1d(num_features: int, *, momentum: float = 0.01, eps: float = 1e-05, max_r: float = 3.0, max_d: float = 5.0, warmup_steps: int = 10000, smooth: bool = False)
BatchRenorm Module (https://arxiv.org/abs/1702.03275).
The code is adapted from https://github.com/google-research/corenet.
BatchRenorm is an enhanced version of the standard BatchNorm. Unlike BatchNorm, it utilizes running statistics to normalize batches after an initial warmup phase. This approach reduces the impact of “outlier” batches that may occur during extended training periods, making BatchRenorm more robust for long training runs.
During the warmup phase, BatchRenorm functions identically to a BatchNorm layer.
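For context, batch renormalization corrects the batch statistics toward the running statistics through two clipped, gradient-stopped factors r and d (bounded by max_r and max_d). Below is a minimal sketch of that correction following the formula in the paper; the function and its argument names are illustrative, not torchrl's internal API:

    import torch

    def batch_renorm(x, running_mean, running_std, max_r=3.0, max_d=5.0, eps=1e-5):
        # Per-feature statistics of the current batch.
        batch_mean = x.mean(dim=0)
        batch_std = (x.var(dim=0, unbiased=False) + eps).sqrt()
        # r rescales and d shifts the batch statistics toward the running ones;
        # both are clipped and detached so no gradient flows through them.
        r = (batch_std / running_std).clamp(1.0 / max_r, max_r).detach()
        d = ((batch_mean - running_mean) / running_std).clamp(-max_d, max_d).detach()
        return (x - batch_mean) / batch_std * r + d

With r = 1 and d = 0 this reduces to standard batch normalization, which is why the module behaves exactly like BatchNorm during warmup.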
- Parameters:
num_features (int) – Number of features in the input tensor.
- Keyword Arguments:
momentum (float, optional) – Momentum factor for computing the running mean and variance. Defaults to 0.01.
eps (float, optional) – Small value added to the variance to avoid division by zero. Defaults to 1e-5.
max_r (float, optional) – Maximum value for the scaling factor r. Defaults to 3.0.
max_d (float, optional) – Maximum value for the bias factor d. Defaults to 5.0.
warmup_steps (int, optional) – Number of warm-up steps for the running mean and variance. Defaults to 10000.
smooth (bool, optional) – If True, the behavior smoothly transitions from regular batch-norm (when iter=0) to batch-renorm (when iter=warmup_steps). Otherwise, the behavior transitions abruptly from batch-norm to batch-renorm at iter=warmup_steps. Defaults to False.
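A minimal usage sketch; the input shape (batch, num_features) is an illustrative assumption:

    >>> import torch
    >>> from torchrl.modules import BatchRenorm1d
    >>> bn = BatchRenorm1d(32, momentum=0.01, warmup_steps=10000)
    >>> x = torch.randn(64, 32)        # assumed input shape: (batch, num_features)
    >>> y = bn(x)                      # training mode by default; warmup acts like BatchNorm
    >>> y.shape
    torch.Size([64, 32])
    >>> bn = bn.eval()                 # eval mode: normalize with running statistics only
    >>> y_eval = bn(torch.randn(8, 32))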
- forward(x: Tensor) → Tensor
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
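Concretely, prefer the instance call over invoking forward() by hand; a short sketch reusing the bn and x from the example above:

    >>> y = bn(x)            # preferred: runs registered hooks, then forward()
    >>> y2 = bn.forward(x)   # works, but silently skips any registered hooks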