tensordict.nn.distributions.AddStateIndependentNormalScale

class tensordict.nn.distributions.AddStateIndependentNormalScale(scale_shape: Union[Size, int, tuple], scale_mapping: str = 'exp', scale_lb: Number = 0.0001)

An nn.Module that adds trainable state-independent scale parameters to its input.

The scale parameters are mapped onto positive values using the specified scale_mapping.

Parameters:
  • scale_shape (torch.Size, int or tuple) – the shape of the trainable scale parameters.

  • scale_mapping (str, optional) – positive mapping function to be used with the std. Default is "exp". Choices: "softplus", "exp", "relu", "biased_softplus_1" (a softplus map shifted so that fn(0.0) = 1.0).

  • scale_lb (Number, optional) – the minimum value that the scale can take. Default is 1e-4.
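The mappings named above can be sketched in plain torch. This is an illustrative reimplementation, not the library's code; the `biased_softplus` helper below is hypothetical, written so that the output at a raw value of 0.0 equals the bias, matching the description of "biased_softplus_1":

```python
import math
import torch
import torch.nn.functional as F

def biased_softplus(x: torch.Tensor, bias: float = 1.0) -> torch.Tensor:
    # Hypothetical sketch: softplus shifted so that fn(0.0) == bias.
    # The shift is the inverse softplus of `bias`: log(exp(bias) - 1).
    shift = math.log(math.exp(bias) - 1.0)
    return F.softplus(x + shift)

raw = torch.zeros(3)  # freshly initialized scale parameters
print(torch.exp(raw))        # "exp" mapping: tensor([1., 1., 1.])
print(biased_softplus(raw))  # "biased_softplus_1": fn(0.0) = 1.0

# scale_lb then clamps the mapped scale from below:
scale = biased_softplus(raw).clamp_min(1e-4)
```

Both mappings send a zero-initialized parameter to a scale of 1.0, which is why they are common defaults for Gaussian policy heads.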

Examples

>>> import torch
>>> from torch import nn
>>> from tensordict.nn.distributions import AddStateIndependentNormalScale
>>> num_outputs = 4
>>> module = nn.Linear(3, num_outputs)
>>> module_normal = AddStateIndependentNormalScale(num_outputs)
>>> tensor = torch.randn(3)
>>> loc, scale = module_normal(module(tensor))
>>> print(loc.shape, scale.shape)
torch.Size([4]) torch.Size([4])
>>> assert (scale > 0).all()
>>> # with modules that return more than one tensor
>>> module = nn.LSTM(3, num_outputs)
>>> module_normal = AddStateIndependentNormalScale(num_outputs)
>>> tensor = torch.randn(4, 2, 3)
>>> loc, scale, others = module_normal(*module(tensor))
>>> print(loc.shape, scale.shape)
torch.Size([4, 2, 4]) torch.Size([4, 2, 4])
>>> assert (scale > 0).all()