# ConstantLR

class torch.optim.lr_scheduler.ConstantLR(optimizer, factor=0.3333333333333333, total_iters=5, last_epoch=-1, verbose=False)[source]

Decays the learning rate of each parameter group by a small constant factor until the number of epochs reaches a pre-defined milestone: total_iters. Notice that such decay can happen simultaneously with other changes to the learning rate from outside this scheduler. When last_epoch=-1, the learning rate is set to the initial lr.

Parameters
• optimizer (Optimizer) – Wrapped optimizer.

• factor (float) – The constant factor by which the learning rate is multiplied until the milestone is reached. Default: 1./3.

• total_iters (int) – The number of steps for which the scheduler decays the learning rate. Default: 5.

• last_epoch (int) – The index of the last epoch. Default: -1.

• verbose (bool) – If True, prints a message to stdout for each update. Default: False.

Example

>>> # Assuming optimizer uses lr = 0.05 for all groups
>>> # lr = 0.025   if epoch == 0
>>> # lr = 0.025   if epoch == 1
>>> # lr = 0.025   if epoch == 2
>>> # lr = 0.025   if epoch == 3
>>> # lr = 0.05    if epoch >= 4
>>> scheduler = ConstantLR(optimizer, factor=0.5, total_iters=4)
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()
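
A minimal, self-contained sketch of the same schedule (the single dummy parameter and the SGD optimizer below are illustrative assumptions, not part of the snippet above), printing the learning rate actually used each epoch:

>>> import torch
>>> from torch.optim import SGD
>>> from torch.optim.lr_scheduler import ConstantLR
>>> param = torch.nn.Parameter(torch.zeros(1))   # dummy "model" parameter
>>> optimizer = SGD([param], lr=0.05)
>>> scheduler = ConstantLR(optimizer, factor=0.5, total_iters=4)
>>> for epoch in range(6):
>>>     print(epoch, scheduler.get_last_lr())    # [0.025] for epochs 0-3, [0.05] afterwards
>>>     optimizer.step()                         # a real loop would compute gradients first
>>>     scheduler.step()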

get_last_lr()

Return the last learning rate computed by the current scheduler.
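
As a brief illustration (reusing the scheduler from the sketch above), the returned value is a list with one learning rate per parameter group:

>>> scheduler.get_last_lr()   # e.g. [0.025] while epoch < total_iters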

load_state_dict(state_dict)

Load the scheduler's state.

Parameters

state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().

print_lr(is_verbose, group, lr, epoch=None)

Display the current learning rate.

state_dict()

Returns the state of the scheduler as a dict.

It contains an entry for every variable in self.__dict__ which is not the optimizer.
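
For example, a minimal checkpointing sketch (reusing the optimizer and scheduler objects constructed in the sketch above; the file name checkpoint.pt is illustrative) that saves and restores both optimizer and scheduler state:

>>> # Save: the scheduler state excludes the wrapped optimizer,
>>> # so the optimizer's own state_dict is stored alongside it.
>>> torch.save({'optimizer': optimizer.state_dict(),
>>>             'scheduler': scheduler.state_dict()}, 'checkpoint.pt')
>>> # Restore into freshly constructed optimizer/scheduler objects.
>>> checkpoint = torch.load('checkpoint.pt')
>>> optimizer.load_state_dict(checkpoint['optimizer'])
>>> scheduler.load_state_dict(checkpoint['scheduler'])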