DreamerActorLoss
- class torchrl.objectives.DreamerActorLoss(*args, **kwargs)
Dreamer Actor Loss.
Computes the loss of the Dreamer actor. The actor loss is computed as the negative average lambda return over imagined trajectories (a minimal sketch of this computation follows the parameter list below).
Reference: https://arxiv.org/abs/1912.01603.
- Parameters:
actor_model (TensorDictModule) – the actor model.
value_model (TensorDictModule) – the value model.
model_based_env (DreamerEnv) – the model based environment.
imagination_horizon (int, optional) – the number of steps to unroll the model. Defaults to 15.
discount_loss (bool, optional) – if True, the loss is discounted with a gamma discount factor. Defaults to False.
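The lambda return mentioned above can be written as a short backward recursion. The following is a minimal, self-contained sketch in plain PyTorch; the function name lambda_return, the gamma/lmbda values, and the toy tensors are illustrative assumptions, not torchrl's internal implementation.

import torch

def lambda_return(rewards, values, bootstrap, gamma=0.99, lmbda=0.95):
    # TD(lambda) return over an imagined trajectory of length T.
    # rewards, values: tensors of shape [T]; bootstrap: scalar value
    # estimate for the state after the last imagined step.
    next_values = torch.cat([values[1:], bootstrap.unsqueeze(0)])
    inputs = rewards + gamma * (1 - lmbda) * next_values
    last = bootstrap
    returns = []
    for t in reversed(range(rewards.shape[0])):
        # R_t = r_t + gamma * ((1 - lmbda) * v_{t+1} + lmbda * R_{t+1})
        last = inputs[t] + gamma * lmbda * last
        returns.append(last)
    return torch.stack(returns[::-1])

T = 15  # matches the default imagination_horizon
rewards, values = torch.randn(T), torch.randn(T)
bootstrap = torch.randn(())
actor_loss = -lambda_return(rewards, values, bootstrap).mean()

The actor loss is then the negative mean of these returns, as in the description above.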
- forward(tensordict: TensorDict) → Tuple[TensorDict, TensorDict]
Reads an input TensorDict and returns another tensordict whose loss keys are named “loss*”.
Splitting the loss into its components lets the trainer log the various loss values throughout training. Other scalars present in the output tensordict will be logged too.
- Parameters:
tensordict – an input tensordict with the values required to compute the loss.
- Returns:
A new tensordict with no batch dimension containing various loss scalars which will be named “loss*”. It is essential that the losses are returned with this name as they will be read by the trainer before backpropagation.
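Concretely, a trainer can aggregate every “loss*” entry of the returned tensordict before calling backward(). The snippet below is a hedged sketch of that pattern; the hand-built loss_td is a hypothetical stand-in for the output of forward(), not something torchrl produces here.

import torch
from tensordict import TensorDict

# Hypothetical stand-in for the tensordict returned by forward();
# in practice it comes from a DreamerActorLoss instance applied to a batch.
param = torch.nn.Parameter(torch.tensor(1.0))
loss_td = TensorDict({"loss_actor": (param - 2.0) ** 2}, batch_size=[])

# Sum every "loss*" entry, then backpropagate, as a trainer would.
total_loss = sum(v for k, v in loss_td.items() if k.startswith("loss"))
total_loss.backward()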
- make_value_estimator(value_type: Optional[ValueEstimators] = None, **hyperparams)
Value-function constructor.
If a non-default value function is wanted, it must be built using this method.
- Parameters:
value_type (ValueEstimators) – a ValueEstimators enum type indicating the value function to use. If none is provided, the default stored in the default_value_estimator attribute will be used. The resulting value estimator class will be registered in self.value_type, allowing future refinements.
**hyperparams – hyperparameters to use for the value function. If not provided, the value indicated by default_value_kwargs() will be used.
Examples
>>> import torch
>>> from torchrl.objectives import DQNLoss, ValueEstimators
>>> # initialize the DQN loss
>>> actor = torch.nn.Linear(3, 4)
>>> dqn_loss = DQNLoss(actor, action_space="one-hot")
>>> # updating the parameters of the default value estimator
>>> dqn_loss.make_value_estimator(gamma=0.9)
>>> dqn_loss.make_value_estimator(
...     ValueEstimators.TD1,
...     gamma=0.9)
>>> # if we want to change the gamma value
>>> dqn_loss.make_value_estimator(dqn_loss.value_type, gamma=0.9)
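The same pattern applies to DreamerActorLoss. As a hedged sketch, assuming loss is an already-constructed DreamerActorLoss instance, a TDLambda estimator could be configured as:

>>> loss.make_value_estimator(ValueEstimators.TDLambda, gamma=0.99, lmbda=0.95)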