SparseAdam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08)
Implements a lazy version of the Adam algorithm suitable for sparse tensors.
In this variant, only moments that show up in the gradient get updated, and only those portions of the gradient get applied to the parameters.
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-3)
betas (Tuple[float, float], optional) – coefficients used for computing running averages of the gradient and its square (default: (0.9, 0.999))
eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)
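A minimal usage sketch (the layer sizes and indices are illustrative): SparseAdam is typically paired with an nn.Embedding constructed with sparse=True, which produces the sparse gradients this optimizer expects.

import torch
import torch.nn as nn

# An embedding with sparse=True produces sparse gradients on backward,
# which is the layout SparseAdam requires.
embedding = nn.Embedding(num_embeddings=1000, embedding_dim=64, sparse=True)
optimizer = torch.optim.SparseAdam(embedding.parameters(), lr=1e-3)

# Only rows 3, 17, and 42 appear in the gradient for this batch, so only
# their moment estimates and parameter entries are updated (the "lazy" part).
indices = torch.tensor([3, 17, 42])
loss = embedding(indices).sum()
loss.backward()
optimizer.step()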
add_param_group(param_group)
Add a param group to the Optimizer's param_groups.
This can be useful when fine tuning a pre-trained network, as frozen layers can be made trainable and added to the Optimizer as training progresses.
param_group (dict) – Specifies what Tensors should be optimized along with group specific optimization options.
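A sketch of the fine-tuning pattern described above; the two embedding tables and the second learning rate are illustrative, and both layers use sparse=True since SparseAdam requires sparse gradients in every group.

import torch
import torch.nn as nn

word_emb = nn.Embedding(1000, 64, sparse=True)
char_emb = nn.Embedding(128, 32, sparse=True)
char_emb.weight.requires_grad_(False)  # frozen at the start of training

optimizer = torch.optim.SparseAdam(word_emb.parameters(), lr=1e-3)

# Later in training: unfreeze the second table and register it as a new
# param group, here with its own (illustrative) learning rate.
char_emb.weight.requires_grad_(True)
optimizer.add_param_group({"params": char_emb.parameters(), "lr": 1e-4})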
load_state_dict(state_dict)
Loads the optimizer state.
state_dict (dict) – optimizer state. Should be an object returned from a call to state_dict().
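A sketch of restoring optimizer state from a checkpoint; the file name "checkpoint.pt" and the checkpoint keys are hypothetical, and the stored dict is assumed to come from a prior state_dict() call.

import torch
import torch.nn as nn

embedding = nn.Embedding(1000, 64, sparse=True)
optimizer = torch.optim.SparseAdam(embedding.parameters(), lr=1e-3)

# Hypothetical checkpoint layout: {"model": ..., "optimizer": ...},
# where the optimizer entry was produced by optimizer.state_dict().
checkpoint = torch.load("checkpoint.pt")
embedding.load_state_dict(checkpoint["model"])
optimizer.load_state_dict(checkpoint["optimizer"])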
state_dict()
Returns the state of the optimizer as a dict.
It contains two entries:
- state – a dict holding current optimization state. Its content differs between optimizer classes.
- param_groups – a list containing all parameter groups, where each parameter group is a dict.
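A short sketch showing the two entries described above; the layer is illustrative.

import torch
import torch.nn as nn

embedding = nn.Embedding(1000, 64, sparse=True)
optimizer = torch.optim.SparseAdam(embedding.parameters(), lr=1e-3)

state = optimizer.state_dict()
print(list(state.keys()))               # ['state', 'param_groups']
print(state["param_groups"][0]["lr"])   # 0.001

# The dict round-trips: a fresh optimizer over the same params can
# restore it via load_state_dict().
restored = torch.optim.SparseAdam(embedding.parameters(), lr=1e-3)
restored.load_state_dict(state)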
step(closure=None)
Performs a single optimization step.
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
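A minimal sketch of the closure form; the model, loss, and indices are illustrative. The closure recomputes the loss and its gradients so step() can return the loss.

import torch
import torch.nn as nn
import torch.nn.functional as F

embedding = nn.Embedding(1000, 64, sparse=True)
optimizer = torch.optim.SparseAdam(embedding.parameters(), lr=1e-3)
indices = torch.tensor([1, 5, 9])
target = torch.zeros(3, 64)

def closure():
    optimizer.zero_grad()
    loss = F.mse_loss(embedding(indices), target)
    loss.backward()
    return loss

loss = optimizer.step(closure)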
zero_grad(set_to_none=False)
Sets the gradients of all optimized torch.Tensors to zero.
set_to_none (bool) – instead of setting to zero, set the grads to None. This will in general have a lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example:
1. When the user tries to access a gradient and perform manual ops on it, a None attribute or a Tensor full of 0s will behave differently.
2. If the user requests zero_grad(set_to_none=True) followed by a backward pass, .grads are guaranteed to be None for params that did not receive a gradient.
3. torch.optim optimizers have a different behavior if the gradient is 0 or None (in one case it does the step with a gradient of 0 and in the other it skips the step altogether).
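A sketch contrasting the two modes on a sparse-gradient layer; the sizes are illustrative, and set_to_none is passed explicitly since its default has varied across versions.

import torch
import torch.nn as nn

embedding = nn.Embedding(10, 4, sparse=True)
optimizer = torch.optim.SparseAdam(embedding.parameters(), lr=1e-3)

embedding(torch.tensor([0, 1])).sum().backward()
print(embedding.weight.grad is None)   # False: a sparse gradient exists

# Zeroing in place keeps the gradient tensor allocated ...
optimizer.zero_grad(set_to_none=False)
print(embedding.weight.grad is None)   # still False, but all zeros

# ... while set_to_none=True frees it entirely.
embedding(torch.tensor([0, 1])).sum().backward()
optimizer.zero_grad(set_to_none=True)
print(embedding.weight.grad is None)   # True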