
create_lr_scheduler_with_warmup

ignite.handlers.param_scheduler.create_lr_scheduler_with_warmup(lr_scheduler, warmup_start_value, warmup_duration, warmup_end_value=None, save_history=False, output_simulated_values=None)

Helper method to create a learning rate scheduler with a linear warm-up.

Parameters
  • lr_scheduler (Union[ParamScheduler, _LRScheduler]) – learning rate scheduler to use after the warm-up phase.

  • warmup_start_value (float) – learning rate start value of the warm-up phase.

  • warmup_duration (int) – duration of the warm-up phase, in number of events.

  • warmup_end_value (Optional[float]) – learning rate end value of the warm-up phase (default=None). If None, warmup_end_value is set to the optimizer's initial lr.

  • save_history (bool) – whether to log the parameter values to engine.state.param_history (default=False).

  • output_simulated_values (Optional[List]) – optional output of simulated learning rate values. If output_simulated_values is a list of None values, e.g. [None] * 100, it is filled with 100 simulated learning rate values after the call (see the sketch right after this list).
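
A minimal sketch of output_simulated_values follows. The dummy parameter, the SGD optimizer with initial lr=0.1, and the variable names below are assumptions made only for illustration and are not part of the API; the point is that the combined schedule can be simulated without ever attaching the scheduler to an engine.

import torch
from torch.optim.lr_scheduler import ExponentialLR

from ignite.handlers.param_scheduler import create_lr_scheduler_with_warmup

# Dummy setup assumed for illustration only: one parameter, SGD with initial lr=0.1.
param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.SGD([param], lr=0.1)

simulated_values = [None] * 100
create_lr_scheduler_with_warmup(ExponentialLR(optimizer=optimizer, gamma=0.98),
                                warmup_start_value=0.0,
                                warmup_end_value=0.1,
                                warmup_duration=10,
                                output_simulated_values=simulated_values)
# simulated_values is now filled with 100 simulated learning rate entries that can
# be inspected or plotted before the scheduler is attached to a trainer.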

Returns

a scheduler that performs the linear warm-up and then delegates to lr_scheduler.

Return type

ConcatScheduler

Note

If the first learning rate value provided by lr_scheduler differs from warmup_end_value, an additional event is added after the warm-up phase so that the warm-up still ends at warmup_end_value, after which lr_scheduler provides its learning rate values as usual.
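
As a hedged sketch of this case (the dummy optimizer below is assumed only for illustration), the optimizer starts at lr=0.1 while warmup_end_value=0.05, so ExponentialLR's first value does not match the end of the warm-up:

import torch
from torch.optim.lr_scheduler import ExponentialLR

from ignite.handlers.param_scheduler import create_lr_scheduler_with_warmup

# Assumed setup: the optimizer's initial lr (0.1) differs from warmup_end_value (0.05).
optimizer = torch.optim.SGD([torch.nn.Parameter(torch.zeros(1))], lr=0.1)

simulated_values = [None] * 8
create_lr_scheduler_with_warmup(ExponentialLR(optimizer=optimizer, gamma=0.98),
                                warmup_start_value=0.0,
                                warmup_end_value=0.05,
                                warmup_duration=3,
                                output_simulated_values=simulated_values)
# The simulated schedule should show the warm-up reaching exactly 0.05, one extra
# event handing over to lr_scheduler, and ExponentialLR's own values afterwards.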

Examples

import torch
from torch.optim.lr_scheduler import ExponentialLR

from ignite.engine import Engine, Events
from ignite.handlers.param_scheduler import create_lr_scheduler_with_warmup

# Minimal setup assumed by this example: an SGD optimizer with initial lr=0.1
# (matching the printed values below) and a trainer with a no-op update step.
default_optimizer = torch.optim.SGD([torch.nn.Parameter(torch.zeros(1))], lr=0.1)
default_trainer = Engine(lambda engine, batch: None)

torch_lr_scheduler = ExponentialLR(optimizer=default_optimizer, gamma=0.98)

scheduler = create_lr_scheduler_with_warmup(torch_lr_scheduler,
                                            warmup_start_value=0.0,
                                            warmup_end_value=0.1,
                                            warmup_duration=3)

# The combined scheduler is attached first so it updates the lr before print_lr runs.
default_trainer.add_event_handler(Events.ITERATION_COMPLETED, scheduler)

@default_trainer.on(Events.ITERATION_COMPLETED)
def print_lr():
    print(default_optimizer.param_groups[0]["lr"])

default_trainer.run([0] * 8, max_epochs=1)
0.0
0.05
0.1
0.098
0.09604
0.09411...
0.09223...
0.09039...
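
The first three printed values (0.0, 0.05 and 0.1) are the linear warm-up over warmup_duration=3 events; from the fourth event on, ExponentialLR takes over and multiplies the learning rate by gamma=0.98 at every event (0.098, 0.09604, ...).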

New in version 0.4.5.