vec_td_lambda_return_estimate¶
- class torchrl.objectives.value.functional.vec_td_lambda_return_estimate(gamma, lmbda, next_state_value, reward, done, rolling_gamma: Optional[bool] = None, time_dim: int = -2)[source]¶
Vectorized TD(\(\lambda\)) return estimate.
- Parameters:
gamma (scalar, Tensor) – exponential mean discount. If tensor-valued, must be a [Batch x TimeSteps x 1] tensor.
lmbda (scalar) – trajectory discount.
next_state_value (Tensor) – value function result with new_state input. Must be a [Batch x TimeSteps x 1] tensor.
reward (Tensor) – reward of taking actions in the environment. Must be a [Batch x TimeSteps x 1] or [Batch x TimeSteps] tensor.
done (Tensor) – boolean flag for end of episode.
rolling_gamma (bool, optional) – if True, it is assumed that each gamma of a gamma tensor is tied to a single event:
gamma = [g1, g2, g3, g4]
value = [v1, v2, v3, v4]
return = [
    v1 + g1 v2 + g1 g2 v3 + g1 g2 g3 v4,
    v2 + g2 v3 + g2 g3 v4,
    v3 + g3 v4,
    v4,
]
if False, it is assumed that each gamma is tied to the upcoming trajectory:
gamma = [g1, g2, g3, g4]
value = [v1, v2, v3, v4]
return = [
    v1 + g1 v2 + g1**2 v3 + g1**3 v4,
    v2 + g2 v3 + g2**2 v4,
    v3 + g3 v4,
    v4,
]
Default is True.
time_dim (int) – dimension where the time is unrolled. Defaults to -2.
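The two rolling_gamma conventions above can be sketched in plain Python for a per-step gamma tensor, with lambda folded out so only the discounted value sums remain. The helper names below are hypothetical and not part of torchrl; this is an illustrative sketch of the semantics, not the vectorized implementation.

```python
# Illustrative sketch of the two rolling_gamma conventions using a
# per-step gamma and values only. Helper names are hypothetical.

def discounted_return_rolling(gamma, value):
    """rolling_gamma=True: gamma[k] discounts the transition at step k."""
    out = []
    for t in range(len(value)):
        ret, disc = 0.0, 1.0
        for k in range(t, len(value)):
            ret += disc * value[k]
            disc *= gamma[k]  # discount factor changes at every step
        out.append(ret)
    return out

def discounted_return_fixed(gamma, value):
    """rolling_gamma=False: gamma[t] applies to the whole tail trajectory."""
    out = []
    for t in range(len(value)):
        ret, disc = 0.0, 1.0
        for k in range(t, len(value)):
            ret += disc * value[k]
            disc *= gamma[t]  # discount frozen at the trajectory start
        out.append(ret)
    return out

gamma = [0.9, 0.8, 0.7, 0.6]
value = [1.0, 1.0, 1.0, 1.0]
# True:  v1 + g1 v2 + g1 g2 v3 + g1 g2 g3 v4 = 1 + 0.9 + 0.72 + 0.504
print(discounted_return_rolling(gamma, value)[0])
# False: v1 + g1 v2 + g1**2 v3 + g1**3 v4 = 1 + 0.9 + 0.81 + 0.729
print(discounted_return_fixed(gamma, value)[0])
```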
All tensors (values, reward and done) must have shape [*Batch x TimeSteps x *F], with *F feature dimensions.
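The quantity this function computes in vectorized form can be written as a simple backward recursion, G_t = r_t + gamma * (1 - d_t) * [(1 - lambda) * V(s_{t+1}) + lambda * G_{t+1}]. The sketch below is a hypothetical non-vectorized reference for scalar gamma and 1-D inputs, not the torchrl implementation.

```python
# Non-vectorized sketch of the TD(lambda) return, assuming a scalar
# gamma and 1-D list inputs. Hypothetical reference helper, not torchrl.

def td_lambda_return(gamma, lmbda, next_state_value, reward, done):
    T = len(reward)
    returns = [0.0] * T
    nxt = 0.0  # G_{t+1}, accumulated backwards in time
    for t in reversed(range(T)):
        not_done = 0.0 if done[t] else 1.0
        # G_t = r_t + gamma * (1 - d_t) * [(1-lambda) V(s_{t+1}) + lambda G_{t+1}]
        returns[t] = reward[t] + gamma * not_done * (
            (1 - lmbda) * next_state_value[t] + lmbda * nxt
        )
        nxt = returns[t]
    return returns

rewards = [1.0, 1.0, 1.0]
values = [0.5, 0.5, 0.5]   # V(s_{t+1}) at each step
done = [False, False, True]
print(td_lambda_return(0.9, 0.8, values, rewards, done))
```

At the terminal step the bootstrap term is masked out by done, so the return reduces to the immediate reward; earlier steps blend the one-step bootstrap with the lambda-weighted tail return.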