FastaiLRFinder#
- class ignite.handlers.lr_finder.FastaiLRFinder[source]#
Learning rate finder handler for supervised trainers.
While attached, the handler increases the learning rate between two boundaries in a linear or exponential manner. It provides valuable information on how well the network can be trained over a range of learning rates and what an optimal learning rate might be.
Examples
from ignite.handlers import FastaiLRFinder

trainer = ...
model = ...
optimizer = ...

lr_finder = FastaiLRFinder()
to_save = {"model": model, "optimizer": optimizer}

with lr_finder.attach(trainer, to_save=to_save) as trainer_with_lr_finder:
    trainer_with_lr_finder.run(dataloader)

# Get lr_finder results
lr_finder.get_results()

# Plot lr_finder results (requires matplotlib)
lr_finder.plot()

# get lr_finder suggestion for lr
lr_finder.lr_suggestion()
Note
When the context manager is exited, all of the LR finder's handlers are removed.
Note
Please also keep in mind that all other handlers attached to the trainer will be executed during the LR finder's run.
Note
This class may require the matplotlib package to be installed in order to plot the learning rate range test:
pip install matplotlib
References
Cyclical Learning Rates for Training Neural Networks: https://arxiv.org/abs/1506.01186
fastai/lr_find: https://github.com/fastai/fastai
New in version 0.4.6.
Methods
- apply_suggested_lr – Applying the suggested learning rate(s) on the given optimizer.
- attach – Attaches lr_finder to a given trainer.
- get_results – Returns a dictionary with loss and lr logs from the previous run.
- lr_suggestion – Returns the learning rate at the minimum numerical gradient.
- plot – Plots the learning rate range test.
- apply_suggested_lr(optimizer)[source]#
Applying the suggested learning rate(s) on the given optimizer.
- Parameters
optimizer (Optimizer) – the optimizer to apply the suggested learning rate(s) on.
- Return type
None
Note
The given optimizer must be the same one used when the suggested learning rate was found.
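For example, a run of the finder followed by applying the suggestion to the same optimizer might look like this (a minimal sketch, assuming trainer, model, optimizer and dataloader are defined as in the example above):

lr_finder = FastaiLRFinder()
to_save = {"model": model, "optimizer": optimizer}

with lr_finder.attach(trainer, to_save=to_save) as trainer_with_lr_finder:
    trainer_with_lr_finder.run(dataloader)

# The optimizer below is the same instance that was passed in to_save
lr_finder.apply_suggested_lr(optimizer)
print(optimizer.param_groups[0]["lr"])  # now set to the suggested value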
- attach(trainer, to_save, output_transform=<function FastaiLRFinder.<lambda>>, num_iter=None, start_lr=None, end_lr=10.0, step_mode='exp', smooth_f=0.05, diverge_th=5.0)[source]#
Attaches lr_finder to a given trainer. It also resets model and optimizer at the end of the run.
- Parameters
trainer (Engine) – lr_finder is attached to this trainer. Please keep in mind that all attached handlers will be executed.
to_save (Mapping) – dictionary with the optimizer and other objects that need to be restored after running the LR finder. For example, to_save={'optimizer': optimizer, 'model': model}. It should contain an "optimizer" key for the optimizer. All objects should also implement state_dict and load_state_dict methods.
output_transform (Callable) – function that transforms the trainer's state.output after each iteration. It must return the loss of that iteration.
num_iter (Optional[int]) – number of iterations for the lr schedule between base lr and end_lr. By default, it will run for trainer.state.epoch_length * trainer.state.max_epochs.
start_lr (Optional[float]) – lower bound for the lr search. Default, the learning rate specified with the optimizer.
end_lr (float) – upper bound for the lr search. Default, 10.0.
step_mode (str) – "exp" or "linear", whether the lr should be increased exponentially or linearly from start_lr to end_lr. Default, "exp".
smooth_f (float) – loss smoothing factor in the range [0, 1). Default, 0.05.
diverge_th (float) – the search is stopped when current loss > diverge_th * best_loss. Default, 5.0.
- Returns
trainer_with_lr_finder (trainer used for finding the lr)
Examples
to_save = {"model": model, "optimizer": optimizer} with lr_finder.attach(trainer, to_save=to_save) as trainer_with_lr_finder: trainer_with_lr_finder.run(dataloader)
Note
lr_finder cannot be attached to more than one trainer at a time.
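As a further illustration, the search bounds and schedule can be set explicitly when attaching (a sketch; the concrete values here are arbitrary):

to_save = {"model": model, "optimizer": optimizer}

with lr_finder.attach(
    trainer,
    to_save=to_save,
    start_lr=1e-6,    # lower bound of the search
    end_lr=1.0,       # upper bound of the search
    num_iter=100,     # number of iterations between the bounds
    step_mode="exp",  # increase the lr exponentially
) as trainer_with_lr_finder:
    trainer_with_lr_finder.run(dataloader)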
- lr_suggestion()[source]#
- Returns
Learning rate at the minimum numerical gradient (ignoring the increasing part of the curve)
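A short sketch of reading back the suggestion together with the logged values after a run; the "lr" and "loss" keys are assumed to match the loss and lr logs returned by get_results:

suggested_lr = lr_finder.lr_suggestion()
results = lr_finder.get_results()  # assumed to hold the "lr" and "loss" logs
print(f"suggested lr: {suggested_lr:.2e} over {len(results['lr'])} tried values")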
- plot(skip_start=10, skip_end=5, log_lr=True, display_suggestion=True, ax=None, **kwargs)[source]#
Plots the learning rate range test.
This method requires the matplotlib package to be installed:
pip install matplotlib
- Parameters
skip_start (int) – number of batches to trim from the start. Default: 10.
skip_end (int) – number of batches to trim from the end. Default: 5.
log_lr (bool) – True to plot the learning rate in a logarithmic scale; otherwise, plotted in a linear scale. Default: True.
display_suggestion (bool) – if True, a red dot shows the suggested learning rate. Default: True.
ax (Optional[Any]) – Pre-existing axes for the plot. Default: None.
kwargs (Any) – optional kwargs passed to plt.subplots if ax is not provided.
- Return type
None
ax = lr_finder.plot(skip_end=0)
ax.figure.savefig("output.jpg")
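A sketch of plotting onto a pre-existing axes instead of letting plot() create one (the figure size and file name are illustrative):

import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(8, 4))
lr_finder.plot(skip_start=10, skip_end=5, ax=ax)
fig.savefig("lr_range_test.png")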