
auto_optim

ignite.distributed.auto.auto_optim(optimizer, **kwargs)[source]

Helper method to adapt an optimizer for non-distributed and distributed configurations (supporting all available backends from available_backends()).

Internally, this method is a no-op for non-distributed and native torch distributed configurations.

For the XLA distributed configuration, a new class is created that inherits from the class of the provided optimizer. The goal is to override the step() method with an implementation based on xm.optimizer_step (see the sketch below).
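A minimal sketch of this idea, not Ignite's exact implementation, assuming torch_xla is installed; the helper name make_xla_optimizer is hypothetical:

import torch_xla.core.xla_model as xm

def make_xla_optimizer(optimizer):
    # Subclass the optimizer's own class and route step() through
    # xm.optimizer_step, which also reduces gradients across XLA devices.
    class _XLAOptimizer(type(optimizer)):
        def step(self, closure=None):
            xm.optimizer_step(self, barrier=True)

    optimizer.__class__ = _XLAOptimizer
    return optimizer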

For the Horovod distributed configuration, the optimizer is wrapped with Horovod's DistributedOptimizer and its state is broadcast from rank 0 to all other processes (see the sketch below).
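Roughly what this amounts to for the Horovod backend (a sketch assuming horovod.torch is installed, not Ignite's exact code):

import horovod.torch as hvd

# Wrap the optimizer so gradients are averaged across Horovod processes,
# forwarding any extra kwargs, then synchronize its state from rank 0.
optimizer = hvd.DistributedOptimizer(optimizer, **kwargs)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)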

Parameters
  • optimizer (Optimizer) – input torch optimizer

  • kwargs (Any) – keyword arguments passed to the Horovod backend’s DistributedOptimizer.

Returns

The adapted optimizer.

Return type

Optimizer

Examples

import ignite.distributed as idist

# `optimizer` is any torch.optim.Optimizer created beforehand,
# e.g. torch.optim.SGD(model.parameters(), lr=0.01)
optimizer = idist.auto_optim(optimizer)
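
With the Horovod backend, extra keyword arguments are forwarded to DistributedOptimizer; for instance (backward_passes_per_step is a Horovod DistributedOptimizer argument, shown here as an illustration):

optimizer = idist.auto_optim(optimizer, backward_passes_per_step=2)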

Changed in version 0.4.2: Added Horovod distributed optimizer.

Changed in version 0.4.7: Added kwargs to idist.auto_optim.