SequentialLR¶
- class torch.optim.lr_scheduler.SequentialLR(optimizer, schedulers, milestones, last_epoch=-1, verbose='deprecated')[source]¶
Receives a list of schedulers that are expected to be called sequentially during the optimization process, and a list of milestone points that specifies exactly which scheduler is supposed to be active at a given epoch.
- Parameters
optimizer (Optimizer) – Wrapped optimizer.
schedulers (list) – List of chained schedulers.
milestones (list) – List of integers that reflects milestone points.
last_epoch (int) – The index of the last epoch. Default: -1.
verbose (bool | str) – Does nothing.
Deprecated since version 2.2: verbose is deprecated. Please use get_last_lr() to access the learning rate.
Example
>>> # Assuming optimizer uses lr = 1. for all groups
>>> # lr = 0.1   if epoch == 0
>>> # lr = 0.1   if epoch == 1
>>> # lr = 0.9   if epoch == 2
>>> # lr = 0.81  if epoch == 3
>>> # lr = 0.729 if epoch == 4
>>> scheduler1 = ConstantLR(optimizer, factor=0.1, total_iters=2)
>>> scheduler2 = ExponentialLR(optimizer, gamma=0.9)
>>> scheduler = SequentialLR(optimizer, schedulers=[scheduler1, scheduler2], milestones=[2])
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()
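The number of milestones must be one less than the number of schedulers, since each milestone marks the epoch at which control passes to the next scheduler. A minimal sketch with three stages; the LinearLR middle stage and its settings are illustrative assumptions, not part of the example above:
>>> # Hand-offs at epochs 2 and 10: ConstantLR for epochs 0-1,
>>> # LinearLR for epochs 2-9, ExponentialLR from epoch 10 on.
>>> s1 = ConstantLR(optimizer, factor=0.1, total_iters=2)
>>> s2 = LinearLR(optimizer, start_factor=0.5, total_iters=8)
>>> s3 = ExponentialLR(optimizer, gamma=0.9)
>>> scheduler = SequentialLR(optimizer, schedulers=[s1, s2, s3], milestones=[2, 10])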
- load_state_dict(state_dict)[source]¶
Loads the scheduler's state.
- Parameters
state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().
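For example, the scheduler state can be checkpointed and restored along with the rest of the training state. A minimal sketch, assuming the scheduler from the example above; the file name "checkpoint.pt" is an illustrative assumption:
>>> # Save the scheduler state together with a hypothetical checkpoint.
>>> torch.save({"scheduler": scheduler.state_dict()}, "checkpoint.pt")
>>> # Later, restore it into a freshly constructed scheduler.
>>> checkpoint = torch.load("checkpoint.pt")
>>> scheduler.load_state_dict(checkpoint["scheduler"])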
- print_lr(is_verbose, group, lr, epoch=None)¶
Display the current learning rate.
Deprecated since version 2.4: print_lr() is deprecated. Please use get_last_lr() to access the learning rate.
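Instead of verbose printing, the current learning rate can be read directly after each step, e.g. inside a training loop like the example above:
>>> scheduler.step()
>>> print(scheduler.get_last_lr())  # one learning rate per parameter group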