
vec_generalized_advantage_estimate

class torchrl.objectives.value.functional.vec_generalized_advantage_estimate(gamma: Union[float, torch.Tensor], lmbda: Union[float, torch.Tensor], state_value: torch.Tensor, next_state_value: torch.Tensor, reward: torch.Tensor, done: torch.Tensor, terminated: torch.Tensor | None = None, *, time_dim: int = -2)[source]

Vectorized generalized advantage estimate (GAE) of a trajectory.

Refer to “High-Dimensional Continuous Control Using Generalized Advantage Estimation” (https://arxiv.org/pdf/1506.02438.pdf) for more context.
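The estimator from that paper computes, for each time step, an exponentially weighted sum of temporal-difference residuals (here written with the simplifying assumption that terminal steps contribute no bootstrap value):

```latex
\delta_t = r_t + \gamma \, (1 - \text{terminated}_t) \, V(s_{t+1}) - V(s_t)
\qquad
A_t^{\mathrm{GAE}(\gamma,\lambda)} = \sum_{l \ge 0} (\gamma \lambda)^l \, \delta_{t+l}
```

The vectorized variant evaluates this sum for all time steps at once along `time_dim`, instead of looping backward over the trajectory.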

Parameters:
  • gamma (scalar) – exponential discount factor.

  • lmbda (scalar) – trajectory discount (the GAE λ parameter).

  • state_value (Tensor) – value function result with old_state input.

  • next_state_value (Tensor) – value function result with new_state input.

  • reward (Tensor) – reward of taking actions in the environment.

  • done (Tensor) – boolean flag for end of trajectory.

  • terminated (Tensor) – boolean flag for the natural end of an episode (no bootstrapping from next_state_value). Defaults to done if not provided.

  • time_dim (int) – dimension where the time is unrolled. Defaults to -2.

All tensors (values, reward and done) must have shape [*Batch x TimeSteps x *F], with *F feature dimensions.
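For reference, a minimal pure-Python sketch of the backward recursion this function vectorizes. The function name and the simplified done/terminated handling are illustrative, not the library implementation:

```python
def gae_reference(gamma, lmbda, state_value, next_state_value,
                  reward, done, terminated=None):
    """Backward-recursion GAE over a single 1-D trajectory (lists of floats).

    done/terminated are 0.0/1.0 flags; terminated defaults to done,
    mirroring the documented behavior above.
    """
    if terminated is None:
        terminated = done
    T = len(reward)
    advantage = [0.0] * T
    running = 0.0
    for t in reversed(range(T)):
        # TD residual; a terminated step contributes no bootstrap value.
        delta = (reward[t]
                 + gamma * next_state_value[t] * (1.0 - terminated[t])
                 - state_value[t])
        # The lambda-discounted accumulation is cut at trajectory ends.
        running = delta + gamma * lmbda * running * (1.0 - done[t])
        advantage[t] = running
    # Value target = advantage + baseline, as returned by the TorchRL function.
    value_target = [a + v for a, v in zip(advantage, state_value)]
    return advantage, value_target
```

The vectorized TorchRL version produces the same quantities but computes them in closed form across the time dimension, which is typically much faster on accelerators than this sequential loop.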
