LambdaLR

class torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda, last_epoch=-1, verbose='deprecated')[source]

Sets the learning rate using a user-defined multiplicative function.

The learning rate of each parameter group is set to its initial lr times a given function of the epoch. When last_epoch=-1, the schedule starts from the initial lr.
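Concretely, the value applied to a group at epoch t is its initial lr multiplied by lr_lambda(t). A minimal arithmetic sketch of this rule (base_lr here is an assumed value, not part of the API):

>>> base_lr = 0.1                               # initial lr of one group
>>> lr_lambda = lambda epoch: 0.95 ** epoch     # multiplicative factor
>>> [round(base_lr * lr_lambda(t), 6) for t in range(3)]
[0.1, 0.095, 0.09025]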

Parameters
  • optimizer (Optimizer) – Wrapped optimizer.

  • lr_lambda (function or list) – A function which computes a multiplicative factor given an integer parameter epoch, or a list of such functions, one for each group in optimizer.param_groups.

  • last_epoch (int) – The index of last epoch. Default: -1.

  • verbose (bool | str) –

    If True, prints a message to stdout for each update. Default: False.

    Deprecated since version 2.2: verbose is deprecated. Please use get_last_lr() to access the learning rate.

Example

>>> # Assuming optimizer has two parameter groups.
>>> lambda1 = lambda epoch: epoch // 30
>>> lambda2 = lambda epoch: 0.95 ** epoch
>>> scheduler = LambdaLR(optimizer, lr_lambda=[lambda1, lambda2])
>>> for epoch in range(100):
...     train(...)
...     validate(...)
...     scheduler.step()
get_last_lr()

Return the last learning rate computed by the current scheduler.

Return type

List[float]
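As a replacement for the deprecated verbose flag, the learning rates can be logged explicitly. A sketch, assuming the optimizer and scheduler from the example above:

>>> for epoch in range(3):
...     train(...)
...     scheduler.step()
...     print(epoch, scheduler.get_last_lr())  # one value per parameter group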

get_lr()[source]

Compute the learning rate of each parameter group.

load_state_dict(state_dict)[source]

Load the scheduler’s state.

When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.

Parameters

state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().
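A checkpointing sketch that keeps the scheduler and optimizer in sync (torch.save / torch.load with an assumed path 'checkpoint.pt'):

>>> torch.save({
...     'optimizer': optimizer.state_dict(),
...     'scheduler': scheduler.state_dict(),
... }, 'checkpoint.pt')
>>> ckpt = torch.load('checkpoint.pt')
>>> optimizer.load_state_dict(ckpt['optimizer'])
>>> scheduler.load_state_dict(ckpt['scheduler'])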

print_lr(is_verbose, group, lr, epoch=None)

Display the current learning rate.

Deprecated since version 2.4: print_lr() is deprecated. Please use get_last_lr() to access the learning rate.

state_dict()[source]

Return the state of the scheduler as a dict.

It contains an entry for every variable in self.__dict__ which is not the optimizer. The learning rate lambda functions will only be saved if they are callable objects and not if they are functions or lambdas.

When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.
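Since plain functions and lambdas are skipped, a factor that must survive checkpointing can be written as a callable object whose attributes are captured. A sketch (ExponentialFactor is a hypothetical helper, not part of torch):

>>> class ExponentialFactor:
...     """Callable object; its __dict__ is saved by state_dict()."""
...     def __init__(self, gamma):
...         self.gamma = gamma  # decay rate, restored on load
...     def __call__(self, epoch):
...         return self.gamma ** epoch
...
>>> scheduler = LambdaLR(optimizer, lr_lambda=ExponentialFactor(0.95))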

step(epoch=None)

Perform a step.
