class torchrl.modules.IndependentNormal(loc: Tensor, scale: Tensor, upscale: float = 5.0, tanh_loc: bool = False, event_dim: int = 1, **kwargs)[source]

Implements a Normal distribution with location scaling.

Location scaling prevents the location from being “too far” from 0, which would otherwise lead to numerically unstable samples and poor gradient computation (e.g. gradient explosion). In practice, the location is computed according to

\[loc = tanh(loc / upscale) * upscale.\]

This behaviour is controlled by the tanh_loc parameter and is disabled by default (see below).

  • loc (torch.Tensor) – normal distribution location parameter

  • scale (torch.Tensor) – normal distribution sigma parameter (square root of the variance)

  • upscale (torch.Tensor or number, optional) –

    the scaling factor in the formula:

    \[loc = tanh(loc / upscale) * upscale.\]

    Default is 5.0.

  • tanh_loc (bool, optional) – if True, the above formula is used for the location scaling; otherwise the raw value is kept. Default is False.

  • event_dim (int, optional) – number of trailing dimensions to reinterpret as part of a single event (as in torch.distributions.Independent). Default is 1.

property mode

Returns the mode of the distribution.
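
As an illustration (not part of the TorchRL API), the effect of the location-scaling formula can be sketched in plain Python; the hypothetical helper below mirrors what the distribution applies to its loc argument when tanh_loc is enabled:

```python
import math

def scale_loc(loc: float, upscale: float = 5.0) -> float:
    """Sketch of the location scaling: loc = tanh(loc / upscale) * upscale."""
    # tanh is bounded in (-1, 1), so the result is bounded in (-upscale, upscale).
    return math.tanh(loc / upscale) * upscale

# Values near zero pass through almost unchanged:
print(scale_loc(0.5))    # ≈ 0.4983
# Extreme values are softly clipped to the (-upscale, upscale) range:
print(scale_loc(100.0))  # ≈ 5.0
```

This keeps gradients well-behaved for extreme network outputs: the location saturates smoothly instead of growing without bound.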

