generalized_advantage_estimate¶
- class torchrl.objectives.value.functional.generalized_advantage_estimate(gamma: float, lmbda: float, state_value: Tensor, next_state_value: Tensor, reward: Tensor, done: Tensor, terminated: torch.Tensor | None = None, *, time_dim: int = -2)[source]¶
Generalized advantage estimate of a trajectory.
Refer to “HIGH-DIMENSIONAL CONTINUOUS CONTROL USING GENERALIZED ADVANTAGE ESTIMATION” https://arxiv.org/pdf/1506.02438.pdf for more context.
- Parameters:
gamma (scalar) – exponential mean discount.
lmbda (scalar) – trajectory discount.
state_value (Tensor) – value function result with old_state input.
next_state_value (Tensor) – value function result with new_state input.
reward (Tensor) – reward of taking actions in the environment.
done (Tensor) – boolean flag for end of trajectory.
terminated (Tensor) – boolean flag for the end of episode. Defaults to done if not provided.
time_dim (int) – dimension where the time is unrolled. Defaults to -2.
All tensors (values, reward and done) must have shape [*Batch x TimeSteps x *F], with *F feature dimensions.
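To make the roles of the parameters concrete, here is a minimal pure-Python sketch of the GAE recursion over a single trajectory. This is an illustrative reimplementation, not the TorchRL code: the function name gae and the list-based inputs are assumptions for the example, while the parameter names mirror the documented ones. Note how terminated cuts the bootstrap from next_state_value, whereas done stops the advantage from propagating across trajectory boundaries.

```python
def gae(gamma, lmbda, state_value, next_state_value, reward, done,
        terminated=None):
    # Hypothetical sketch of the GAE recursion; not the TorchRL implementation.
    if terminated is None:
        terminated = done  # documented default: terminated falls back to done
    T = len(reward)
    advantage = [0.0] * T
    running = 0.0
    # Iterate backwards over the time dimension.
    for t in reversed(range(T)):
        # TD residual: terminated zeroes the bootstrapped next-state value.
        delta = (reward[t]
                 + gamma * next_state_value[t] * (1.0 - terminated[t])
                 - state_value[t])
        # done resets the accumulated advantage at trajectory boundaries.
        running = delta + gamma * lmbda * (1.0 - done[t]) * running
        advantage[t] = running
    # Value targets are advantages plus the baseline state values.
    value_target = [a + v for a, v in zip(advantage, state_value)]
    return advantage, value_target

# Tiny 3-step trajectory that terminates at the last step.
adv, target = gae(
    0.99, 0.95,
    state_value=[0.5, 0.6, 0.7],
    next_state_value=[0.6, 0.7, 0.8],
    reward=[1.0, 1.0, 1.0],
    done=[0.0, 0.0, 1.0],
)
```

At the terminal step the advantage reduces to the TD residual without bootstrapping (here 1.0 - 0.7 = 0.3), and earlier steps accumulate discounted residuals through the gamma * lmbda factor.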