td1_return_estimate
- class torchrl.objectives.value.functional.td1_return_estimate(gamma: float, next_state_value: torch.Tensor, reward: torch.Tensor, done: torch.Tensor, terminated: torch.Tensor | None = None, rolling_gamma: bool = None, *, time_dim: int = -2)[source]
TD(1) return estimate.
- Parameters:
gamma (scalar) – exponential mean discount.
next_state_value (Tensor) – value function result with new_state input.
reward (Tensor) – reward of taking actions in the environment.
done (Tensor) – boolean flag for end of trajectory.
terminated (Tensor) – boolean flag for the end of episode. Defaults to done if not provided.
rolling_gamma (bool, optional) – if True, it is assumed that each gamma in a gamma tensor is tied to a single event:

    gamma = [g1, g2, g3, g4]
    value = [v1, v2, v3, v4]
    return = [
        v1 + g1 v2 + g1 g2 v3 + g1 g2 g3 v4,
        v2 + g2 v3 + g2 g3 v4,
        v3 + g3 v4,
        v4,
    ]

if False, it is assumed that each gamma is tied to the upcoming trajectory:

    gamma = [g1, g2, g3, g4]
    value = [v1, v2, v3, v4]
    return = [
        v1 + g1 v2 + g1**2 v3 + g1**3 v4,
        v2 + g2 v3 + g2**2 v4,
        v3 + g3 v4,
        v4,
    ]

Default is True. (A short sketch after this parameter list illustrates the two conventions.)
time_dim (int) – dimension where the time is unrolled. Defaults to -2.
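To make the two rolling_gamma conventions concrete, the following sketch computes both returns directly from the formulas above. The g1..g4 discounts and v1..v4 values are hypothetical numbers chosen for illustration; this reimplements the displayed formulas rather than calling the library:

    import torch

    # Hypothetical per-step discounts and values, matching the g1..g4 / v1..v4
    # notation above.
    g = torch.tensor([0.9, 0.8, 0.7, 0.6])
    v = torch.tensor([1.0, 2.0, 3.0, 4.0])
    T = len(v)

    # rolling_gamma=True: each step's gamma is tied to its own transition, so
    # the discount applied to v[j] is the running product g[i] * ... * g[j-1].
    ret_true = torch.zeros(T)
    for i in range(T):
        ret_true[i] = v[i]
        disc = 1.0
        for j in range(i + 1, T):
            disc = disc * g[j - 1]
            ret_true[i] += disc * v[j]

    # rolling_gamma=False: the gamma observed at step i discounts the whole
    # remaining trajectory, raised to increasing powers.
    ret_false = torch.zeros(T)
    for i in range(T):
        for j in range(i, T):
            ret_false[i] += g[i] ** (j - i) * v[j]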
All tensors (values, reward and done) must have shape [*Batch x TimeSteps x *F], with *F feature dimensions.
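A minimal usage sketch, assuming a single trajectory of five time steps with one feature dimension; all tensor contents below are random placeholders, not meaningful data:

    import torch
    from torchrl.objectives.value.functional import td1_return_estimate

    B, T, F = 1, 5, 1  # batch, time steps, feature dims
    next_state_value = torch.randn(B, T, F)
    reward = torch.randn(B, T, F)
    done = torch.zeros(B, T, F, dtype=torch.bool)
    done[:, -1] = True          # the trajectory ends at the last step
    terminated = done.clone()   # no truncation in this example

    ret = td1_return_estimate(
        gamma=0.99,
        next_state_value=next_state_value,
        reward=reward,
        done=done,
        terminated=terminated,
    )
    print(ret.shape)  # same shape as reward: torch.Size([1, 5, 1])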