torch.nn.utils.clip_grads_with_norm_
- torch.nn.utils.clip_grads_with_norm_(parameters, max_norm, total_norm, foreach=None)
Scale the gradients of an iterable of parameters given a pre-calculated total norm and desired max norm.
The gradients will be scaled by the following calculation:

grad = grad * max_norm / (total_norm + 1e-6)
Gradients are modified in-place.
This function is equivalent to torch.nn.utils.clip_grad_norm_() with a pre-calculated total norm.
- Parameters
parameters (Iterable[Tensor] or Tensor) – an iterable of Tensors or a single Tensor that will have gradients normalized
max_norm (float) – max norm of the gradients
total_norm (Tensor) – total norm of the gradients to use for clipping
foreach (bool) – use the faster foreach-based implementation. If None, use the foreach implementation for CUDA and CPU native tensors and silently fall back to the slow implementation for other device types. Default: None
- Returns
None
- Return type
None
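Below is a minimal usage sketch. It assumes the companion helper torch.nn.utils.get_total_norm from the same module, and the model and data here are placeholders; together the two calls reproduce torch.nn.utils.clip_grad_norm_() in two explicit steps.

```python
import torch
from torch import nn
from torch.nn.utils import clip_grads_with_norm_, get_total_norm

# Placeholder model and a dummy backward pass to populate .grad fields.
model = nn.Linear(10, 1)
loss = model(torch.randn(4, 10)).sum()
loss.backward()

# Step 1: pre-compute the total gradient norm (2-norm by default).
grads = [p.grad for p in model.parameters() if p.grad is not None]
total_norm = get_total_norm(grads)

# Step 2: scale every gradient in-place using that pre-calculated norm.
# Steps 1 and 2 together match clip_grad_norm_(model.parameters(), max_norm=1.0).
clip_grads_with_norm_(model.parameters(), max_norm=1.0, total_norm=total_norm)
```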