ConcatScheduler

class ignite.handlers.param_scheduler.ConcatScheduler(schedulers, durations, save_history=False)[source]

Concatenates a list of parameter schedulers.

The ConcatScheduler goes through the given schedulers one after another. How many events each scheduler stays active for is defined by the durations list of integers; the last scheduler needs no duration entry and runs for all remaining events.

Parameters
  • schedulers (List[ParamScheduler]) – list of parameter schedulers.

  • durations (List[int]) – list of the number of events each scheduler from schedulers stays active for. Must contain one entry per scheduler except the last, which runs for all remaining events.

  • save_history (bool) – whether to log the parameter values to engine.state.param_history (default: False).

Examples

from collections import OrderedDict

import torch
from torch import nn, optim

from ignite.engine import *
from ignite.handlers import *
from ignite.metrics import *
from ignite.utils import *
from ignite.contrib.metrics.regression import *
from ignite.contrib.metrics import *

# create default evaluator for doctests

def eval_step(engine, batch):
    return batch

default_evaluator = Engine(eval_step)

# create default optimizer for doctests

param_tensor = torch.zeros([1], requires_grad=True)
default_optimizer = torch.optim.SGD([param_tensor], lr=0.1)

# create default trainer for doctests
# as handlers could be attached to the trainer,
# each test must define its own trainer using `.. testsetup:`

def get_default_trainer():

    def train_step(engine, batch):
        return batch

    return Engine(train_step)

# create default model for doctests

default_model = nn.Sequential(OrderedDict([
    ('base', nn.Linear(4, 2)),
    ('fc', nn.Linear(2, 1))
]))

manual_seed(666)
default_trainer = get_default_trainer()

scheduler_1 = LinearCyclicalScheduler(default_optimizer, "lr", 0.0, 1.0, 8)
scheduler_2 = CosineAnnealingScheduler(default_optimizer, "lr", 1.0, 0.2, 4)

# Sets the learning rate linearly from 0.0 to 1.0 over the first 4 iterations,
# then starts an annealing schedule from 1.0 to 0.2 over the next 4 iterations.
# The annealing cycles are repeated indefinitely.
combined_scheduler = ConcatScheduler(schedulers=[scheduler_1, scheduler_2], durations=[4, ])

default_trainer.add_event_handler(Events.ITERATION_STARTED, combined_scheduler)

@default_trainer.on(Events.ITERATION_COMPLETED)
def print_lr():
    print(default_optimizer.param_groups[0]["lr"])

default_trainer.run([0] * 8, max_epochs=1)
0.0
0.25
0.5
0.75
1.0
0.8828...
0.6000...
0.3171...

New in version 0.4.5.

Methods

  • get_param – Method to get current parameter values.

  • load_state_dict – Copies parameters from state_dict into this ConcatScheduler.

  • simulate_values – Method to simulate scheduled values during num_events events.

  • state_dict – Returns a dictionary containing the whole state of ConcatScheduler.

get_param()[source]

Method to get current parameter values.

Returns

list of params, or scalar param

Return type

Union[List[float], float]
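
A minimal usage sketch, reusing combined_scheduler from the example above; the returned value reflects whichever underlying scheduler is currently active:

current_lr = combined_scheduler.get_param()
print(current_lr)  # e.g. 0.0 before the first scheduler event fires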

load_state_dict(state_dict)[source]

Copies parameters from state_dict into this ConcatScheduler.

Parameters

state_dict (Mapping) – a dict containing parameters.

Return type

None
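
A short round-trip sketch, assuming the combined_scheduler defined above; useful when resuming training from a checkpoint:

saved = combined_scheduler.state_dict()
# ... training continues and the scheduler state advances ...
combined_scheduler.load_state_dict(saved)  # restore the captured state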

classmethod simulate_values(num_events, schedulers, durations, param_names=None)[source]

Method to simulate scheduled values during num_events events.

Parameters
  • num_events (int) – number of events during the simulation.

  • schedulers (List[ParamScheduler]) – list of parameter schedulers.

  • durations (List[int]) – list of the number of events each scheduler from schedulers stays active for.

  • param_names (Optional[Union[List[str], Tuple[str]]]) – parameter name or list of parameter names to simulate values for. By default, the first scheduler’s parameter name is taken.

Returns

list of [event_index, value_0, value_1, …], where values correspond to param_names.

Return type

list
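
A brief sketch that previews the schedule without attaching anything to an engine, reusing scheduler_1 and scheduler_2 from the example above:

values = ConcatScheduler.simulate_values(
    num_events=8,
    schedulers=[scheduler_1, scheduler_2],
    durations=[4],
)
# values is a list like [[0, 0.0], [1, 0.25], ..., [7, 0.3171...]]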

state_dict()[source]

Returns a dictionary containing the whole state of the ConcatScheduler.

Returns

a dictionary containing the whole state of the ConcatScheduler

Return type

dict
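
A minimal persistence sketch using torch.save; this is one assumed checkpointing approach, not the only one:

torch.save({"scheduler": combined_scheduler.state_dict()}, "checkpoint.pt")
checkpoint = torch.load("checkpoint.pt")
combined_scheduler.load_state_dict(checkpoint["scheduler"])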