auto_model#

ignite.distributed.auto.auto_model(model, sync_bn=False, **kwargs)[source]#

Helper method to adapt the provided model for non-distributed and distributed configurations (supporting all available backends from available_backends()).

Internally, we perform the following (see the sketch after this list):

  • send the model to the current device() if the model’s parameters are not already on that device.

  • wrap the model in torch DistributedDataParallel for native torch distributed if the world size is larger than 1.

  • wrap the model in torch DataParallel if no distributed context is found and more than one CUDA device is available.

  • broadcast the initial variable states from rank 0 to all other processes if the Horovod distributed framework is used.

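The decision logic above can be summarized by the following sketch. It is illustrative only, not the actual implementation: auto_model_sketch is a hypothetical name, and the real code passes additional arguments (e.g. device placement) to the wrapping classes.

import torch
import ignite.distributed as idist

def auto_model_sketch(model):
    # Move the model's parameters to the current device if needed
    model.to(idist.device())

    if idist.get_world_size() > 1:
        if idist.backend() == "horovod":
            # Horovod: broadcast the initial parameter states from rank 0
            import horovod.torch as hvd
            hvd.broadcast_parameters(model.state_dict(), root_rank=0)
        else:
            # Native torch distributed: wrap the model in DistributedDataParallel
            model = torch.nn.parallel.DistributedDataParallel(model)
    elif torch.cuda.device_count() > 1:
        # No distributed context, multiple CUDA devices: wrap in DataParallel
        model = torch.nn.DataParallel(model)
    return model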
Examples:

import ignite.distributed as idist

model = idist.auto_model(model)

In addition, with NVidia/Apex it can be used in the following way:

import ignite.distributed as idist
from apex import amp

# Initialize Apex AMP first, then adapt the model for the current configuration
model, optimizer = amp.initialize(model, optimizer, opt_level=opt_level)
model = idist.auto_model(model)

Parameters
  • model (Module) – model to adapt.

  • sync_bn (bool) – if True, applies torch convert_sync_batchnorm to the model (native torch distributed only). Default: False. Note that when using Nvidia/Apex, the batchnorm conversion should be applied before calling amp.initialize.

  • kwargs (Any) – keyword arguments passed to the model’s wrapping class: torch DistributedDataParallel or torch DataParallel, if applicable. Please make sure the kwargs are valid for the given backend (see the example after this parameter list).

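For example, DistributedDataParallel keyword arguments such as find_unused_parameters can be forwarded through auto_model. This is a usage sketch; the extra kwargs only take effect when a wrapping class is actually applied.

import ignite.distributed as idist

# find_unused_parameters is forwarded to DistributedDataParallel when the
# native torch distributed configuration wraps the model
model = idist.auto_model(model, sync_bn=True, find_unused_parameters=True)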
Returns

torch.nn.Module

Return type

Module

Changed in version 0.4.2:

  • Added Horovod distributed framework.

  • Added sync_bn argument.

Changed in version 0.4.3: Added kwargs to idist.auto_model.