SequentialLR

class torch.optim.lr_scheduler.SequentialLR(optimizer, schedulers, milestones, last_epoch=-1, verbose='deprecated')[source]

Contains a list of schedulers expected to be called sequentially during the optimization process.

Specifically, the schedulers will be called according to the milestone points, which specify the exact epochs at which control passes from one scheduler to the next.

Parameters
  • optimizer (Optimizer) – Wrapped optimizer.

  • schedulers (list) – List of schedulers to be called in sequence.

  • milestones (list) – List of integers that reflect the milestone points, i.e. the epochs at which to switch to the next scheduler.

  • last_epoch (int) – The index of last epoch. Default: -1.

  • verbose (bool | str) –

    Does nothing.

    Deprecated since version 2.2: verbose is deprecated. Please use get_last_lr() to access the learning rate.

Example

>>> from torch.optim.lr_scheduler import ConstantLR, ExponentialLR, SequentialLR
>>> # Assuming optimizer uses lr = 1. for all groups
>>> # lr = 0.1     if epoch == 0
>>> # lr = 0.1     if epoch == 1
>>> # lr = 0.9     if epoch == 2
>>> # lr = 0.81    if epoch == 3
>>> # lr = 0.729   if epoch == 4
>>> scheduler1 = ConstantLR(optimizer, factor=0.1, total_iters=2)
>>> scheduler2 = ExponentialLR(optimizer, gamma=0.9)
>>> scheduler = SequentialLR(optimizer, schedulers=[scheduler1, scheduler2], milestones=[2])
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()

get_last_lr()

Return the last learning rate computed by the current scheduler.

Return type

List[float]
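
A list is returned because an optimizer may hold several parameter groups, one rate per group. As a minimal sketch, reusing the scheduler and the train(...) stub from the example above:

>>> for epoch in range(5):
>>>     train(...)
>>>     scheduler.step()
>>>     print(scheduler.get_last_lr())  # e.g. [0.1] while ConstantLR is active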

get_lr()

Compute the learning rate using the chainable form of the scheduler. This is primarily used internally by step(); to read the rate currently in effect, call get_last_lr().

Return type

List[float]

load_state_dict(state_dict)[source]

Load the scheduler’s state.

Parameters

state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().
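
As a minimal sketch of resuming a run (the checkpoint file name and dict keys are illustrative assumptions, not part of the API):

>>> # Restore states previously saved with torch.save (see state_dict() below)
>>> checkpoint = torch.load('checkpoint.pt')
>>> optimizer.load_state_dict(checkpoint['optimizer'])
>>> scheduler.load_state_dict(checkpoint['scheduler'])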

print_lr(is_verbose, group, lr, epoch=None)

Display the current learning rate.

Deprecated since version 2.4: print_lr() is deprecated. Please use get_last_lr() to access the learning rate.

state_dict()[source]

Return the state of the scheduler as a dict.

It contains an entry for every variable in self.__dict__ which is not the optimizer. The wrapped scheduler states will also be saved.
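
Saving is the mirror of the load sketch under load_state_dict() above (again, the file name and dict keys are illustrative):

>>> torch.save({'optimizer': optimizer.state_dict(),
>>>             'scheduler': scheduler.state_dict()},
>>>            'checkpoint.pt')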

step()[source]

Perform a step, delegating to whichever wrapped scheduler the milestones mark as currently active.
