td0_return_estimate

torchrl.objectives.value.functional.td0_return_estimate(gamma: float, next_state_value: Tensor, reward: Tensor, terminated: Optional[Tensor] = None, *, done: Optional[Tensor] = None)

TD(0) discounted return estimate of a trajectory.

Also known as bootstrapped Temporal Difference or one-step return.
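
Concretely, writing r_t for reward, gamma for the discount, V(s_{t+1}) for next_state_value, and terminated_t for the termination flag, the one-step return follows the standard TD(0) definition, where the terminated flag zeroes the bootstrap term at episode boundaries:

    G_t = r_t + gamma * (1 - terminated_t) * V(s_{t+1})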

Parameters:
  • gamma (scalar) – exponential discount factor.

  • next_state_value (Tensor) – value function evaluated at the next state. Must be a [Batch x TimeSteps x 1] or [Batch x TimeSteps] tensor.

  • reward (Tensor) – reward obtained by taking an action in the environment. Must be a [Batch x TimeSteps x 1] or [Batch x TimeSteps] tensor.

  • terminated (Tensor) – boolean flag marking the end of the episode. Defaults to done if not provided.

Keyword Arguments:
  • done (Tensor) – Deprecated. Use terminated instead.

All tensors (values, reward, and done) must have shape [*Batch x TimeSteps x *F], with *F the feature dimensions.
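
A minimal usage sketch (shapes and values here are illustrative assumptions, not taken from the library docs); the hand-rolled computation mirrors the TD(0) formula above:

Examples:
    >>> import torch
    >>> from torchrl.objectives.value.functional import td0_return_estimate
    >>> gamma = 0.99
    >>> reward = torch.ones(1, 4, 1)                    # [Batch x TimeSteps x 1]
    >>> next_state_value = torch.full((1, 4, 1), 10.0)  # V(s_{t+1}) at each step
    >>> terminated = torch.zeros(1, 4, 1, dtype=torch.bool)
    >>> terminated[0, -1] = True                        # episode ends at the last step
    >>> ret = td0_return_estimate(gamma, next_state_value, reward, terminated)
    >>> # hand-rolled TD(0): r_t + gamma * (1 - terminated_t) * V(s_{t+1})
    >>> manual = reward + gamma * (~terminated).float() * next_state_value
    >>> torch.allclose(ret, manual)
    True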
