td0_advantage_estimate
- torchrl.objectives.value.functional.td0_advantage_estimate(gamma: float, state_value: Tensor, next_state_value: Tensor, reward: Tensor, done: Tensor)
TD(0) advantage estimate of a trajectory.
Also known as the bootstrapped temporal-difference error or one-step return.
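Concretely, this corresponds to the standard one-step TD formula (notation assumed for illustration, not quoted from this page):

$$A_t = r_t + \gamma \,(1 - d_t)\, V(s_{t+1}) - V(s_t),$$

where $r_t$ is the reward, $d_t$ the done flag, and $V$ the value function.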
- Parameters:
gamma (scalar) – exponential discount factor.
state_value (Tensor) – value function evaluated at the current state.
next_state_value (Tensor) – value function evaluated at the next state.
reward (Tensor) – reward received for taking the action in the environment.
done (Tensor) – boolean flag for end of episode.
All tensors (values, reward and done) must have shape [*Batch x TimeSteps x *F], with *F feature dimensions.
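A minimal usage sketch, assuming illustrative shapes and random values (none of these numbers come from the source); the final check reflects the standard TD(0) advantage formula above:

```python
import torch
from torchrl.objectives.value.functional import td0_advantage_estimate

# Illustrative shapes following [*Batch x TimeSteps x *F]:
# 2 trajectories, 5 time steps, 1 feature dimension (assumed).
B, T, F = 2, 5, 1
state_value = torch.randn(B, T, F)             # V(s_t)
next_state_value = torch.randn(B, T, F)        # V(s_{t+1})
reward = torch.randn(B, T, F)                  # r_t
done = torch.zeros(B, T, F, dtype=torch.bool)  # end-of-episode flags

advantage = td0_advantage_estimate(
    gamma=0.99,
    state_value=state_value,
    next_state_value=next_state_value,
    reward=reward,
    done=done,
)

# Standard TD(0) advantage: r_t + gamma * (1 - done_t) * V(s_{t+1}) - V(s_t)
expected = reward + 0.99 * (~done).float() * next_state_value - state_value
assert torch.allclose(advantage, expected)
```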