SequentialLR¶
- class torch.optim.lr_scheduler.SequentialLR(optimizer, schedulers, milestones, last_epoch=-1, verbose='deprecated')[source]¶
Contains a list of schedulers expected to be called sequentially during the optimization process.
Specifically, the schedulers will be called according to the milestone points, which should provide exact intervals by which each scheduler should be called at a given epoch.
- Parameters
optimizer (Optimizer) – Wrapped optimizer.
schedulers (list) – List of chained schedulers.
milestones (list) – List of integers that reflects milestone points.
last_epoch (int) – The index of last epoch. Default: -1.
verbose (bool | str) – Does nothing.
Deprecated since version 2.2: verbose is deprecated. Please use get_last_lr() to access the learning rate.
Example
>>> # Assuming optimizer uses lr = 1. for all groups
>>> # lr = 0.1   if epoch == 0
>>> # lr = 0.1   if epoch == 1
>>> # lr = 0.9   if epoch == 2
>>> # lr = 0.81  if epoch == 3
>>> # lr = 0.729 if epoch == 4
>>> scheduler1 = ConstantLR(optimizer, factor=0.1, total_iters=2)
>>> scheduler2 = ExponentialLR(optimizer, gamma=0.9)
>>> scheduler = SequentialLR(optimizer, schedulers=[scheduler1, scheduler2], milestones=[2])
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()
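A common pattern is to chain a warmup schedule with a decay schedule. The sketch below assumes a toy linear model and an SGD optimizer (not part of this page) and pairs a LinearLR warmup with CosineAnnealingLR decay; milestones=[5] hands control to the second scheduler at epoch 5.
>>> import torch
>>> from torch.optim.lr_scheduler import SequentialLR, LinearLR, CosineAnnealingLR
>>> model = torch.nn.Linear(10, 2)  # toy model, for illustration only
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
>>> warmup = LinearLR(optimizer, start_factor=0.01, total_iters=5)
>>> decay = CosineAnnealingLR(optimizer, T_max=95)
>>> scheduler = SequentialLR(optimizer, schedulers=[warmup, decay], milestones=[5])
>>> for epoch in range(100):
>>>     optimizer.step()   # stand-in for a real training step
>>>     scheduler.step()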
- load_state_dict(state_dict)[source]¶
Load the scheduler’s state.
- Parameters
state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().
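For checkpointing, the scheduler state can be saved with state_dict() and restored into a freshly built SequentialLR. A minimal sketch, assuming the optimizer and scheduler chain from the example above and a hypothetical file path:
>>> torch.save(scheduler.state_dict(), "sequential_lr.pt")  # hypothetical path
>>> # Later: rebuild the same optimizer and scheduler chain, then restore the state.
>>> restored = SequentialLR(optimizer, schedulers=[scheduler1, scheduler2], milestones=[2])
>>> restored.load_state_dict(torch.load("sequential_lr.pt"))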
- print_lr(is_verbose, group, lr, epoch=None)¶
Display the current learning rate.
Deprecated since version 2.4: print_lr() is deprecated. Please use get_last_lr() to access the learning rate.
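Since print_lr() is deprecated, the most recently computed learning rate for each parameter group can be read with get_last_lr(). A minimal sketch, assuming the scheduler from the example above:
>>> scheduler.step()
>>> scheduler.get_last_lr()  # one learning rate per parameter group, e.g. [0.1]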
- recursive_undo(sched=None)[source]¶
Recursively undo any step performed by the initialization of schedulers.