DTLoss

class torchrl.objectives.DTLoss(*args, **kwargs)[source]

TorchRL implementation of the Decision Transformer loss.

Presented in “Decision Transformer: Reinforcement Learning via Sequence Modeling” <https://arxiv.org/abs/2106.01345>

Parameters:

  • actor_network (ProbabilisticActor) – the stochastic actor network.

Keyword Arguments:
  • loss_function (str) – loss function to use. Defaults to "l2".

  • reduction (str, optional) – Specifies the reduction to apply to the output: "none" | "mean" | "sum". "none": no reduction will be applied; "mean": the sum of the output will be divided by the number of elements in the output; "sum": the output will be summed. Default: "mean".

forward(tensordict: TensorDictBase = None) → TensorDictBase[source]

Compute the loss for the Decision Transformer.
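The semantics of the loss_function and reduction keyword arguments can be illustrated with a minimal pure-Python sketch. This assumes the default "l2" loss computes the squared error between the actor's predicted actions and the target actions taken from the input tensordict; the function name and the predicted/target lists below are illustrative stand-ins, not the real TorchRL API.

```python
# Hypothetical sketch of an "l2" loss with the documented reduction modes.
# `predicted` stands in for the actor's output actions, `target` for the
# target actions read from the input tensordict.

def l2_loss(predicted, target, reduction="mean"):
    """Element-wise squared error, reduced per the `reduction` argument."""
    elementwise = [(p - t) ** 2 for p, t in zip(predicted, target)]
    if reduction == "none":
        return elementwise                           # no reduction applied
    if reduction == "sum":
        return sum(elementwise)                      # output is summed
    if reduction == "mean":
        return sum(elementwise) / len(elementwise)   # averaged (the default)
    raise ValueError(f"unknown reduction: {reduction!r}")

predicted = [0.5, 1.0, -0.5]
target = [0.0, 1.0, 0.5]
print(l2_loss(predicted, target, "none"))  # [0.25, 0.0, 1.0]
print(l2_loss(predicted, target, "sum"))   # 1.25
print(l2_loss(predicted, target))          # 1.25 / 3
```

In the actual module, this computation happens inside forward(), which reads the actor's predictions and the targets from the tensordict it receives and returns a tensordict containing the resulting loss value.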
