RewardScaling

class torchrl.envs.transforms.RewardScaling(loc: Union[float, torch.Tensor], scale: Union[float, torch.Tensor], in_keys: Sequence[NestedKey] | None = None, out_keys: Sequence[NestedKey] | None = None, standard_normal: bool = False)[source]

Affine transform of the reward.

The reward is transformed according to:

\[reward = reward * scale + loc\]
Parameters:
  • loc (number or torch.Tensor) – location of the affine transform

  • scale (number or torch.Tensor) – scale of the affine transform

  • standard_normal (bool, optional) – if True, the transform is instead

    \[reward = (reward - loc)/scale\]

    as is done for standardization. Default is False.
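
A minimal usage sketch follows (the environment name and the loc/scale values are illustrative assumptions, not part of this documentation). It shows the two formulas above applied to plain tensors and the typical way the transform is attached to an environment through TransformedEnv:

```python
import torch

from torchrl.envs import GymEnv, TransformedEnv
from torchrl.envs.transforms import RewardScaling

# Illustrative values; choose loc/scale to suit your reward range.
loc, scale = 0.5, 2.0
reward = torch.tensor([1.0, -0.5, 2.0])

# Default mode (standard_normal=False): reward = reward * scale + loc
affine = reward * scale + loc

# standard_normal=True: reward = (reward - loc) / scale
standardized = (reward - loc) / scale

# Typical wiring: rescale rewards on the fly inside a TransformedEnv.
# "Pendulum-v1" is an assumed example environment.
env = TransformedEnv(
    GymEnv("Pendulum-v1"),
    RewardScaling(loc=loc, scale=scale, standard_normal=True),
)
td = env.rand_step(env.reset())
# td["next", "reward"] now holds (raw_reward - loc) / scale.
```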

transform_reward_spec(reward_spec: TensorSpec) → TensorSpec[source]

Transforms the reward spec so that the resulting spec matches the transform mapping.

Parameters:

reward_spec (TensorSpec) – spec before the transform

Returns:

expected spec after the transform
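
A hedged sketch of calling this method directly (the UnboundedContinuousTensorSpec import and the shape chosen here are assumptions for illustration; in normal use TransformedEnv invokes this method for you):

```python
from torchrl.data import UnboundedContinuousTensorSpec
from torchrl.envs.transforms import RewardScaling

t = RewardScaling(loc=0.0, scale=10.0)

# Spec of the raw reward before the transform (shape chosen for illustration).
reward_spec = UnboundedContinuousTensorSpec(shape=(1,))

# Spec expected after rescaling; an unbounded reward stays unbounded
# under an affine map.
new_spec = t.transform_reward_spec(reward_spec)
```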
