- class torchrl.objectives.ClipPPOLoss(*args, **kwargs)
Clipped PPO loss.
- The clipped importance weighted loss is computed as follows:
loss = -min( weight * advantage, min(max(weight, 1-eps), 1+eps) * advantage)
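A minimal sketch of this clipped objective in plain PyTorch. The names `clipped_surrogate` and `log_weight` are illustrative, not part of the TorchRL API; `log_weight` stands for the log of the importance weight pi_new(a|s) / pi_old(a|s):

```python
import torch

def clipped_surrogate(log_weight, advantage, eps=0.2):
    # Importance weight computed in log-space for numerical stability.
    weight = log_weight.exp()
    # Clip the weight to the interval [1 - eps, 1 + eps].
    clipped_weight = weight.clamp(1 - eps, 1 + eps)
    # Take the pessimistic (element-wise minimum) surrogate, negated
    # so that minimizing the loss maximizes the objective.
    return -torch.min(weight * advantage, clipped_weight * advantage)
```

For example, with a weight of 2 and a positive advantage of 1, the clipped term 1.2 * 1 dominates and the loss is -1.2, which removes the incentive to push the policy further from its previous iterate.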
- Parameters:
actor (ProbabilisticTensorDictSequential) – policy operator.
critic (ValueOperator) – value operator.
- Keyword Arguments:
advantage_key (str, optional) – the input tensordict key where the advantage is expected to be written. Defaults to "advantage".
value_target_key (str, optional) – the input tensordict key where the target state value is expected to be written. Defaults to "value_target".
value_key (str, optional) – the input tensordict key where the state value is expected to be written. Defaults to "state_value".
clip_epsilon (scalar, optional) – weight clipping threshold in the clipped PPO loss equation. Defaults to 0.2.
entropy_bonus (bool, optional) – if True, an entropy bonus will be added to the loss to favour exploratory policies. Defaults to True.
samples_mc_entropy (int, optional) – if the distribution retrieved from the policy operator does not have a closed-form formula for the entropy, a Monte-Carlo estimate will be used. samples_mc_entropy controls how many samples are used to compute this estimate. Defaults to 1.
entropy_coef (scalar, optional) – entropy multiplier when computing the total loss. Defaults to 0.01.
critic_coef (scalar, optional) – critic loss multiplier when computing the total loss. Defaults to 1.0.
loss_critic_type (str, optional) – loss function for the value discrepancy. Can be one of "l1", "l2" or "smooth_l1". Defaults to "smooth_l1".
normalize_advantage (bool, optional) – if True, the advantage will be normalized before being used. Defaults to False.
separate_losses (bool, optional) – if True, shared parameters between policy and critic will only be trained on the policy loss. Defaults to False, i.e. gradients are propagated to shared parameters for both policy and critic losses.
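The Monte-Carlo entropy fallback described under samples_mc_entropy can be sketched as follows; the helper name `mc_entropy` is hypothetical, not TorchRL's internal implementation:

```python
import torch
from torch.distributions import Normal

def mc_entropy(dist, num_samples=1):
    # Monte-Carlo entropy estimate: the entropy is E[-log p(x)], so we
    # average -log_prob over samples drawn from the distribution itself.
    # Used when dist has no closed-form .entropy().
    x = dist.sample((num_samples,))
    return -dist.log_prob(x).mean(dim=0)
```

With enough samples the estimate converges to the analytical entropy (for a standard normal, 0.5 * log(2 * pi * e) ≈ 1.4189); with the default of a single sample, the estimate is unbiased but high-variance.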
If the actor and the value function share parameters, one can avoid calling the common module multiple times by passing only the head of the value network to the PPO loss module:
>>> common = SomeModule(in_keys=["observation"], out_keys=["hidden"])
>>> actor_head = SomeActor(in_keys=["hidden"])
>>> value_head = SomeValue(in_keys=["hidden"])
>>> # first option, with 2 calls on the common module
>>> model = ActorCriticOperator(common, actor_head, value_head)
>>> loss_module = PPOLoss(model.get_policy_operator(), model.get_value_operator())
>>> # second option, with a single call to the common module
>>> loss_module = PPOLoss(ProbabilisticTensorDictSequential(model, actor_head), value_head)
This works regardless of whether separate_losses is activated.
- forward(tensordict: TensorDictBase) → TensorDictBase
It is designed to read an input TensorDict and return another tensordict with loss keys named "loss*".
Splitting the loss into its components lets the trainer log the various loss values throughout training. Other scalars present in the output tensordict will be logged too.
- Parameters:
tensordict – an input tensordict with the values required to compute the loss.
- Returns:
A new tensordict with no batch dimension containing various loss scalars named "loss*". It is essential that the losses are returned under these names, as the trainer reads them before backpropagation.
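A hypothetical illustration of how a trainer consumes these keys. The key names below mirror those typically emitted by ClipPPOLoss ("loss_objective", "loss_critic", "loss_entropy", plus a logged "entropy" scalar), but the values are dummies and a plain dict stands in for the output tensordict:

```python
import torch

# Stand-in for the tensordict returned by forward(): scalar losses keyed
# "loss_*", plus other scalars that are logged but not backpropagated.
loss_td = {
    "loss_objective": torch.tensor(0.5, requires_grad=True),
    "loss_critic": torch.tensor(0.3, requires_grad=True),
    "loss_entropy": torch.tensor(-0.01, requires_grad=True),
    "entropy": torch.tensor(0.8),  # logged only, not a "loss_*" key
}

# The trainer sums every key matching "loss*" into the total loss
# before backpropagation.
total_loss = sum(v for k, v in loss_td.items() if k.startswith("loss_"))
total_loss.backward()
```

This is why the naming convention matters: any key that does not start with "loss" is treated as a diagnostic to log rather than a term to optimize.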