- class torch.optim.Rprop(params, lr=0.01, etas=(0.5, 1.2), step_sizes=(1e-06, 50), foreach=None, maximize=False)
Implements the resilient backpropagation algorithm.
For further details regarding the algorithm we refer to the paper A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm.
params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
lr (float, optional) – learning rate (default: 1e-2)
etas (Tuple[float, float], optional) – pair of (etaminus, etaplus), the multiplicative decrease and increase factors (default: (0.5, 1.2))
step_sizes (Tuple[float, float], optional) – a pair of minimal and maximal allowed step sizes (default: (1e-6, 50))
foreach (bool, optional) – whether the foreach implementation of the optimizer is used (default: None)
maximize (bool, optional) – maximize the params based on the objective, instead of minimizing (default: False)
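A minimal usage sketch (the Linear model and random data below are illustrative placeholders, not part of this API):

```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.Rprop(model.parameters(), lr=0.01,
                              etas=(0.5, 1.2), step_sizes=(1e-6, 50))

input = torch.randn(8, 10)   # placeholder batch
target = torch.randn(8, 1)

optimizer.zero_grad()
loss = torch.nn.functional.mse_loss(model(input), target)
loss.backward()
optimizer.step()
```

Because Rprop adapts each parameter's step size from the sign of its gradient, it is typically run on full-batch gradients rather than noisy mini-batches.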
- add_param_group(param_group)
Add a param group to the Optimizer's param_groups.
This can be useful when fine tuning a pre-trained network, as frozen layers can be made trainable and added to the Optimizer as training progresses.
param_group (dict) – Specifies what Tensors should be optimized along with group specific optimization options.
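A sketch of that fine-tuning pattern; the backbone/head split and the per-group learning rate are hypothetical:

```python
import torch

backbone = torch.nn.Linear(10, 10)
head = torch.nn.Linear(10, 2)
for p in backbone.parameters():
    p.requires_grad = False  # backbone starts out frozen

optimizer = torch.optim.Rprop(head.parameters(), lr=0.01)

# Later in training: unfreeze the backbone and register it as a
# new param group with its own options.
for p in backbone.parameters():
    p.requires_grad = True
optimizer.add_param_group({"params": backbone.parameters(), "lr": 0.001})
```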
- load_state_dict(state_dict)
Loads the optimizer state.
state_dict (dict) – optimizer state. Should be an object returned from a call to state_dict().
- state_dict()
Returns the state of the optimizer as a dict.
It contains two entries:
- state - a dict holding current optimization state. Its content differs between optimizer classes.
- param_groups - a list containing all parameter groups where each parameter group is a dict.
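A sketch of round-tripping the optimizer state through state_dict() and load_state_dict(); the checkpoint file name is illustrative:

```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.Rprop(model.parameters(), lr=0.01)

# Save: the returned dict has "state" and "param_groups" entries.
torch.save({"model": model.state_dict(),
            "optimizer": optimizer.state_dict()}, "checkpoint.pt")

# Restore into freshly constructed objects.
checkpoint = torch.load("checkpoint.pt")
model.load_state_dict(checkpoint["model"])
optimizer.load_state_dict(checkpoint["optimizer"])
```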
- step(closure=None)
Performs a single optimization step.
closure (Callable, optional) – A closure that reevaluates the model and returns the loss.
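A sketch of step() with a closure, which lets the optimizer recompute the loss itself; the model and data are placeholders:

```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.Rprop(model.parameters())
input = torch.randn(8, 10)
target = torch.randn(8, 1)

def closure():
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(input), target)
    loss.backward()
    return loss

loss = optimizer.step(closure)
```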
- zero_grad(set_to_none=False)
Sets the gradients of all optimized torch.Tensors to zero.
set_to_none (bool) – instead of setting to zero, set the grads to None. This will in general have a lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example:
1. When the user tries to access a gradient and perform manual ops on it, a None attribute or a Tensor full of 0s will behave differently.
2. If the user requests zero_grad(set_to_none=True) followed by a backward pass, .grads are guaranteed to be None for params that did not receive a gradient.
3. torch.optim optimizers have a different behavior if the gradient is 0 or None (in one case it does the step with a gradient of 0 and in the other it skips the step altogether).
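A small sketch of the set_to_none behavior described above; the model is a placeholder:

```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.Rprop(model.parameters())

model(torch.randn(4, 10)).sum().backward()
optimizer.zero_grad(set_to_none=True)
print(model.weight.grad)  # None, rather than a tensor of zeros
```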