ChainedScheduler

class torch.optim.lr_scheduler.ChainedScheduler(schedulers, optimizer=None)[source]

Chains a list of learning rate schedulers.

Takes in a sequence of chainable learning rate schedulers and calls their step() functions consecutively in just one call to step().

Parameters
  • schedulers (sequence) – sequence of schedulers to chain.

  • optimizer (Optimizer, optional) – Wrapped optimizer. Default: None.

Example

>>> # Assuming optimizer uses lr = 0.5 for all groups
>>> # lr = 0.05      if epoch == 0
>>> # lr = 0.0450    if epoch == 1
>>> # lr = 0.0405    if epoch == 2
>>> # ...
>>> # lr = 0.00675   if epoch == 19
>>> # lr = 0.06078   if epoch == 20
>>> # lr = 0.05470   if epoch == 21
>>> scheduler1 = ConstantLR(optimizer, factor=0.1, total_iters=20)
>>> scheduler2 = ExponentialLR(optimizer, gamma=0.9)
>>> scheduler = ChainedScheduler([scheduler1, scheduler2], optimizer=optimizer)
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()
[Figure: learning rate as a function of epoch for the chained ConstantLR and ExponentialLR schedulers]
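The example above leaves optimizer, train(), and validate() undefined. The following self-contained sketch (the linear model and SGD optimizer are illustrative placeholders) reproduces the same schedule and prints the learning rate around the total_iters boundary; the values in the comments are approximate, not guaranteed output:

>>> import torch
>>> from torch.optim.lr_scheduler import ChainedScheduler, ConstantLR, ExponentialLR
>>> model = torch.nn.Linear(2, 1)  # placeholder model
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.5)
>>> scheduler1 = ConstantLR(optimizer, factor=0.1, total_iters=20)
>>> scheduler2 = ExponentialLR(optimizer, gamma=0.9)
>>> scheduler = ChainedScheduler([scheduler1, scheduler2], optimizer=optimizer)
>>> for epoch in range(22):
>>>     print(epoch, scheduler.get_last_lr())  # ~0.05 at epoch 0, ~0.06078 at epoch 20
>>>     optimizer.step()  # stands in for a real training step
>>>     scheduler.step()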
get_last_lr()[source]

Return the last learning rate computed by the current scheduler.

Return type

list[float]
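For instance, continuing the training loop from the example above, get_last_lr() returns one value per parameter group and can be used for logging:

>>> scheduler.step()
>>> print(scheduler.get_last_lr())  # e.g. approximately [0.045] right after the first step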

get_lr()[source]

Compute the learning rate using the chainable form of the scheduler.

Return type

list[float]

load_state_dict(state_dict)[source]

Load the scheduler’s state.

Parameters

state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().
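A minimal sketch of restoring the scheduler from a checkpoint previously produced with state_dict(); the file name and dictionary key are illustrative:

>>> checkpoint = torch.load("checkpoint.pt")
>>> scheduler.load_state_dict(checkpoint["scheduler"])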

state_dict()[source]

Return the state of the scheduler as a dict.

It contains an entry for every variable in self.__dict__ which is not the optimizer. The wrapped scheduler states will also be saved.

Return type

dict[str, Any]
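For example, the scheduler can be checkpointed alongside the optimizer so training can resume later; the file name and keys are illustrative:

>>> torch.save(
>>>     {"optimizer": optimizer.state_dict(), "scheduler": scheduler.state_dict()},
>>>     "checkpoint.pt",
>>> )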

step()[source]

Perform a step, calling step() on each chained scheduler in sequence.