class torch.optim.Adadelta(params, lr=1.0, rho=0.9, eps=1e-06, weight_decay=0)[source]

Parameters
• params (iterable) – iterable of parameters to optimize or dicts defining parameter groups

• rho (float, optional) – coefficient used for computing a running average of squared gradients (default: 0.9)

• eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-6)

• lr (float, optional) – coefficient that scales delta before it is applied to the parameters (default: 1.0)

• weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
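
A minimal construction sketch (the linear model below is hypothetical and used only to show how the keyword arguments map to the parameters above):

    import torch
    from torch import nn, optim

    model = nn.Linear(10, 1)  # hypothetical model, for illustration only

    optimizer = optim.Adadelta(
        model.parameters(),
        lr=1.0,          # scales delta before it is applied
        rho=0.9,         # decay for the running average of squared gradients
        eps=1e-6,        # numerical-stability term in the denominator
        weight_decay=0,  # L2 penalty
    )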

add_param_group(param_group)

Add a param group to the Optimizer's param_groups.

This can be useful when fine tuning a pre-trained network as frozen layers can be made trainable and added to the Optimizer as training progresses.

Parameters
param_group (dict) – Specifies what Tensors should be optimized along with group-specific optimization options.
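
A sketch of the fine-tuning pattern described above (the backbone/head modules and the per-group lr override are hypothetical choices, not part of the API):

    import torch
    from torch import nn, optim

    backbone = nn.Linear(10, 10)  # hypothetical frozen feature extractor
    head = nn.Linear(10, 2)       # hypothetical trainable classifier head

    for p in backbone.parameters():
        p.requires_grad = False

    optimizer = optim.Adadelta(head.parameters(), lr=1.0)

    # Later in training: unfreeze the backbone and hand it to the optimizer
    # as a new param group with its own options.
    for p in backbone.parameters():
        p.requires_grad = True
    optimizer.add_param_group({"params": backbone.parameters(), "lr": 0.1})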

load_state_dict(state_dict)

Loads the optimizer state.

Parameters

state_dict (dict) – optimizer state. Should be an object returned from a call to state_dict().
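
A checkpointing sketch (the file name and the surrounding save/load pattern are assumptions, not mandated by the API): the state is typically produced by state_dict() and restored into a freshly constructed optimizer.

    import torch
    from torch import nn, optim

    model = nn.Linear(10, 1)  # hypothetical model
    optimizer = optim.Adadelta(model.parameters())

    # Save model and optimizer state together (a common checkpoint layout).
    torch.save({"model": model.state_dict(),
                "optimizer": optimizer.state_dict()}, "checkpoint.pt")

    # Restore: rebuild the model and optimizer first, then load the saved state.
    checkpoint = torch.load("checkpoint.pt")
    model.load_state_dict(checkpoint["model"])
    optimizer.load_state_dict(checkpoint["optimizer"])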

state_dict()

Returns the state of the optimizer as a dict.

It contains two entries:

• state - a dict holding current optimization state. Its content differs between optimizer classes.

• param_groups - a list containing all parameter groups
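
A quick inspection sketch (the model is hypothetical; the printed structure follows the two entries listed above):

    import torch
    from torch import nn, optim

    model = nn.Linear(10, 1)  # hypothetical model
    optimizer = optim.Adadelta(model.parameters())

    sd = optimizer.state_dict()
    print(list(sd.keys()))     # ['state', 'param_groups']
    print(sd["param_groups"])  # each group records its options (lr, rho, eps, ...)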

step(closure=None)[source]

Performs a single optimization step.

Parameters

closure (callable, optional) – A closure that reevaluates the model and returns the loss.
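
A sketch of a step driven by a closure (the model, loss, and data are hypothetical; for Adadelta the closure is optional):

    import torch
    from torch import nn, optim

    model = nn.Linear(10, 1)   # hypothetical model
    criterion = nn.MSELoss()
    optimizer = optim.Adadelta(model.parameters())

    x = torch.randn(4, 10)     # hypothetical batch
    y = torch.randn(4, 1)

    def closure():
        # Re-evaluate the model and return the loss; step() calls this.
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        return loss

    loss = optimizer.step(closure)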

zero_grad(set_to_none=False)

Sets the gradients of all optimized torch.Tensor objects to zero.

Parameters

set_to_none (bool) – instead of setting to zero, set the grads to None. This will in general have a lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example:

1. When the user tries to access a gradient and perform manual ops on it, a None attribute and a Tensor full of 0s will behave differently.

2. If the user requests zero_grad(set_to_none=True) followed by a backward pass, .grads are guaranteed to be None for params that did not receive a gradient.

3. torch.optim optimizers behave differently depending on whether the gradient is 0 or None (in one case the step is performed with a gradient of 0, in the other the step is skipped altogether).
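
A minimal training-loop sketch showing zero_grad(set_to_none=True) (the model, loss, and data are hypothetical):

    import torch
    from torch import nn, optim

    model = nn.Linear(10, 1)   # hypothetical model
    criterion = nn.MSELoss()
    optimizer = optim.Adadelta(model.parameters())

    for _ in range(3):         # a few illustrative iterations
        x = torch.randn(4, 10)
        y = torch.randn(4, 1)

        # Set grads to None rather than zeroing them in place.
        optimizer.zero_grad(set_to_none=True)

        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()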