
ReduceLROnPlateauScheduler#

class ignite.handlers.param_scheduler.ReduceLROnPlateauScheduler(optimizer, metric_name, trainer=None, save_history=False, param_group_index=None, **scheduler_kwargs)[source]#

Reduce LR when a metric stops improving. Wrapper of torch.optim.lr_scheduler.ReduceLROnPlateau.

Parameters
  • optimizer (Optimizer) – Wrapped optimizer.

  • metric_name (str) – Name of the metric whose improvement is monitored. The metric must be attached to the same engine to which this scheduler handler is attached.

  • trainer (Optional[Engine]) – Trainer engine in whose state.param_history the LR history is logged. Used only if save_history is true. Default: None.

  • save_history (bool) – Whether to save the LR history. If true, the history is logged in the trainer's state.param_history. Default: False.

  • param_group_index (Optional[int]) – Index of the optimizer's parameter group to schedule. Default: None, meaning all of the optimizer's parameter groups are used (see the sketch after the examples).

  • scheduler_kwargs (Any) – Keyword arguments to be passed to the wrapped ReduceLROnPlateau.

Examples

# Metric "accuracy" should increase the best value by
# more than 1 unit after at most 2 epochs, otherwise LR
# would get multiplied by 0.5 .

scheduler = ReduceLROnPlateauScheduler(
    default_optimizer,
    metric_name="accuracy", mode="max",
    factor=0.5, patience=1, threshold_mode='abs',
    threshold=1, trainer=trainer
)

metric = Accuracy()
metric.attach(default_evaluator, "accuracy")

default_evaluator.add_event_handler(Events.COMPLETED, scheduler)
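
The snippet above only attaches the scheduler to the evaluator; in a typical setup the trainer runs the evaluator periodically so that "accuracy" is recomputed before the scheduler fires. A minimal sketch, assuming a `trainer` engine and a `val_loader` iterable (both illustrative, not part of the original example):

@trainer.on(Events.EPOCH_COMPLETED)
def run_validation():
    # Recompute "accuracy"; the scheduler attached to Events.COMPLETED
    # on the evaluator then reads it and adjusts the LR.
    default_evaluator.run(val_loader)

The examples on this page rely on the following default setup used for doctests.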
from collections import OrderedDict

import torch
from torch import nn, optim

from ignite.engine import *
from ignite.handlers import *
from ignite.metrics import *
from ignite.metrics.clustering import *
from ignite.metrics.regression import *
from ignite.utils import *

# create default evaluator for doctests

def eval_step(engine, batch):
    return batch

default_evaluator = Engine(eval_step)

# create default optimizer for doctests

param_tensor = torch.zeros([1], requires_grad=True)
default_optimizer = torch.optim.SGD([param_tensor], lr=0.1)

# create default trainer for doctests
# as handlers could be attached to the trainer,
# each test must define its own trainer using `.. testsetup::`

def get_default_trainer():

    def train_step(engine, batch):
        return batch

    return Engine(train_step)

# create default model for doctests

default_model = nn.Sequential(OrderedDict([
    ('base', nn.Linear(4, 2)),
    ('fc', nn.Linear(2, 1))
]))

manual_seed(666)
default_trainer = get_default_trainer()

# Metric "loss" should decrease more than
# 0.1 of best loss after at most
# three iterations. Then best loss would get
# updated, otherwise lr is multiplied by 0.5

scheduler = ReduceLROnPlateauScheduler(
    default_optimizer, "loss",
    save_history=True, mode="min",
    factor=0.5, patience=3, threshold_mode='rel',
    threshold=0.1, trainer=default_trainer
)

metric_values = iter([10, 5, 3, 4, 4, 4, 5, 1])
default_evaluator.state.metrics = {"loss": None}

@default_trainer.on(Events.ITERATION_COMPLETED)
def set_metric_val():
    default_evaluator.state.metrics["loss"] = next(metric_values)

default_evaluator.add_event_handler(Events.COMPLETED, scheduler)

@default_trainer.on(Events.ITERATION_COMPLETED)
def trigger_eval():
    default_evaluator.run([0.])

default_trainer.run([0.] * 8)

print(default_trainer.state.param_history["lr"])
[[0.1], [0.1], [0.1], [0.1], [0.1], [0.1], [0.05], [0.05]]
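
The param_group_index argument restricts scheduling to a single parameter group. A minimal sketch (not part of the original examples), assuming an optimizer with two parameter groups and an `evaluator` engine that computes a "loss" metric:

model = nn.Sequential(nn.Linear(4, 2), nn.Linear(2, 1))
optimizer = optim.SGD([
    {"params": model[0].parameters(), "lr": 0.01},
    {"params": model[1].parameters(), "lr": 0.1},
])

# Only parameter group 1 (lr=0.1) is reduced when "loss" plateaus;
# group 0 keeps its learning rate untouched.
scheduler = ReduceLROnPlateauScheduler(
    optimizer, metric_name="loss", mode="min",
    factor=0.5, patience=2, param_group_index=1,
)
evaluator.add_event_handler(Events.COMPLETED, scheduler)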

New in version 0.4.9.

Methods

  • get_param – Method to get current parameter values.

  • simulate_values – Method to simulate scheduled values during num_events events.
get_param()[source]#

Method to get current parameter values

Returns

list of params, or scalar param

Return type

Union[float, List[float]]
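
A minimal usage sketch, continuing the runnable example above (the printed value is illustrative):

# With a single parameter group, a scalar LR is returned.
current_lr = scheduler.get_param()
print(current_lr)  # e.g. 0.05 after the run above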

classmethod simulate_values(num_events, metric_values, init_lr, **scheduler_kwargs)[source]#

Method to simulate scheduled values during num_events events.

Parameters

  • num_events (int) – number of events during the simulation.

  • metric_values (List[float]) – metric values fed to the scheduler, as if computed by an evaluator at each event.

  • init_lr (float) – initial learning rate of the simulated optimizer.

  • scheduler_kwargs (Any) – keyword arguments used to construct the scheduler for the simulation.

Returns

list of [event_index, value] pairs

Return type

List[List[int]]
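
A minimal sketch of previewing the schedule offline, reusing the metric trace and scheduler configuration from the runnable example above:

# Simulate 8 events, feeding the scheduler one metric value per event,
# starting from an initial LR of 0.1.
simulated = ReduceLROnPlateauScheduler.simulate_values(
    num_events=8,
    metric_values=[10, 5, 3, 4, 4, 4, 5, 1],
    init_lr=0.1,
    mode="min", factor=0.5, patience=3,
    threshold_mode="rel", threshold=0.1,
)

# Each entry is [event_index, lr_value].
for event_index, lr in simulated:
    print(event_index, lr)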