td0_return_estimate
- torchrl.objectives.value.functional.td0_return_estimate(gamma: float, next_state_value: torch.Tensor, reward: torch.Tensor, terminated: torch.Tensor | None = None, *, done: torch.Tensor | None = None)
TD(0) discounted return estimate of a trajectory.
Also known as bootstrapped Temporal Difference or one-step return.
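In symbols, this is the standard one-step bootstrapped target (the notation below is ours, not part of the docstring):

$$G_t = r_t + \gamma \, V(s_{t+1}) \, (1 - \mathrm{terminated}_t)$$

where the terminated flag zeroes the bootstrap term at episode boundaries.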
- Parameters:
gamma (scalar) – exponential discount factor.
next_state_value (Tensor) – value function evaluated at the next state. Must be a [Batch x TimeSteps x 1] or [Batch x TimeSteps] tensor.
reward (Tensor) – reward of taking actions in the environment. Must be a [Batch x TimeSteps x 1] or [Batch x TimeSteps] tensor.
terminated (Tensor) – boolean flag for the end of episode. Defaults to `done` if not provided.
- Keyword Arguments:
done (Tensor) – Deprecated. Use `terminated` instead.
All tensors (values, reward and done) must have shape `[*Batch x TimeSteps x *F]`, with `*F` the feature dimensions.
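A minimal usage sketch (shapes and values below are illustrative, not from the docs):

```python
import torch

from torchrl.objectives.value.functional import td0_return_estimate

# Illustrative shapes: 4 trajectories, 5 time steps, 1 feature dimension.
B, T = 4, 5
reward = torch.randn(B, T, 1)
next_state_value = torch.randn(B, T, 1)  # V(s_{t+1}) from a value network
terminated = torch.zeros(B, T, 1, dtype=torch.bool)
terminated[:, -1] = True  # last step of each trajectory ends the episode

# One-step bootstrapped target: r_t + gamma * V(s_{t+1}) on non-terminal steps.
target = td0_return_estimate(
    gamma=0.99,
    next_state_value=next_state_value,
    reward=reward,
    terminated=terminated,
)
assert target.shape == (B, T, 1)  # the target keeps the reward's shape
```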