vec_td1_return_estimate

torchrl.objectives.value.functional.vec_td1_return_estimate(gamma, next_state_value, reward, done: Tensor, terminated: torch.Tensor | None = None, rolling_gamma: Optional[bool] = None, time_dim: int = -2)[source]

Vectorized TD(1) return estimate.

Parameters:
  • gamma (scalar, Tensor) – exponential mean discount. If tensor-valued, it must carry one discount per time step, laid out along the same time dimension as the other inputs (see time_dim and the shape requirement below).

  • next_state_value (Tensor) – value function result obtained with the new_state input, i.e. the value estimate of the next state.

  • reward (Tensor) – reward of taking actions in the environment.

  • done (Tensor) – boolean flag for end of trajectory.

  • terminated (Tensor) – boolean flag for the end of an episode. Defaults to done if not provided.

  • rolling_gamma (bool, optional) –

    if True, it is assumed that each gamma of the gamma tensor is tied to a single event:

    >>> gamma = [g1, g2, g3, g4]
    >>> value = [v1, v2, v3, v4]
    >>> return = [
    ...   v1 + g1 v2 + g1 g2 v3 + g1 g2 g3 v4,
    ...   v2 + g2 v3 + g2 g3 v4,
    ...   v3 + g3 v4,
    ...   v4,
    ... ]
    

    if False, it is assumed that each gamma is tied to the upcoming trajectory:

    >>> gamma = [g1, g2, g3, g4]
    >>> value = [v1, v2, v3, v4]
    >>> return = [
    ...   v1 + g1 v2 + g1**2 v3 + g1**3 v4,
    ...   v2 + g2 v3 + g2**2 v4,
    ...   v3 + g3 v4,
    ...   v4,
    ... ]
    

    Default is True.

  • time_dim (int) – dimension where the time is unrolled. Defaults to -2.

All tensors (values, reward and done) must have shape [*Batch x TimeSteps x *F], with *F feature dimensions.
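
A minimal usage sketch follows. The module path, function name, and parameter names come from the signature above; the batch size, trajectory length, and the shape used for the tensor-valued gamma are illustrative assumptions rather than API requirements beyond the [*Batch x TimeSteps x *F] layout.

>>> import torch
>>> from torchrl.objectives.value.functional import vec_td1_return_estimate
>>> B, T = 4, 10                              # illustrative batch size and trajectory length
>>> gamma = 0.99                              # scalar exponential discount
>>> next_state_value = torch.randn(B, T, 1)   # value estimate of the next state
>>> reward = torch.randn(B, T, 1)
>>> done = torch.zeros(B, T, 1, dtype=torch.bool)
>>> done[:, -1] = True                        # trajectories end on the last step
>>> ret = vec_td1_return_estimate(gamma, next_state_value, reward, done)
>>> assert ret.shape == reward.shape          # one return estimate per time step
>>> # With a tensor-valued gamma (shape assumed [Batch x TimeSteps x 1]),
>>> # rolling_gamma selects how each per-step discount is applied (see above).
>>> gamma_t = torch.full((B, T, 1), 0.99)
>>> ret_rolling = vec_td1_return_estimate(gamma_t, next_state_value, reward, done, rolling_gamma=True)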
