auto_model

ignite.distributed.auto.auto_model(model, sync_bn=False, **kwargs)[source]

Helper method to adapt the provided model for non-distributed and distributed configurations (supporting all available backends from available_backends()).

Internally, we perform the following (see the sketch after this list):

  • send the model to the current device() if the model’s parameters are not already on that device.

  • wrap the model with torch DistributedDataParallel for native torch distributed if the world size is larger than 1.

  • wrap the model with torch DataParallel if no distributed context is found and more than one CUDA device is available.

  • broadcast the initial variable states from rank 0 to all other processes if the Horovod distributed framework is used.
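
The following is a simplified sketch of that dispatch logic, not the library’s actual implementation (which additionally handles sync_bn conversion and forwards kwargs to the wrapping class):

import torch
import torch.nn as nn
import ignite.distributed as idist

def auto_model_sketch(model: nn.Module) -> nn.Module:
    # Move parameters to the device of the current process if needed.
    model.to(idist.device())

    if idist.get_world_size() > 1:
        backend = idist.backend()
        if backend in ("nccl", "gloo", "mpi"):
            # Native torch distributed: wrap with DistributedDataParallel.
            model = nn.parallel.DistributedDataParallel(model)
        elif backend == "horovod":
            # Horovod: broadcast initial variable states from rank 0.
            import horovod.torch as hvd
            hvd.broadcast_parameters(model.state_dict(), root_rank=0)
    elif torch.cuda.device_count() > 1:
        # No distributed context but several GPUs: wrap with DataParallel.
        model = nn.DataParallel(model)
    return model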

Parameters

  • model (torch.nn.Module) – model to adapt.

  • sync_bn (bool) – if True, applies torch convert_sync_batchnorm to the model (native torch distributed configurations only). Default, False.

  • kwargs (Any) – kwargs passed to the model’s wrapping class: torch DistributedDataParallel or torch DataParallel, if applicable.

Returns

torch.nn.Module

Return type

torch.nn.modules.module.Module

Examples

import ignite.distributed as idist

model = idist.auto_model(model)
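
For context, auto_model() is usually called inside the process function launched with idist.Parallel. A minimal sketch, assuming a gloo backend and a placeholder model:

import torch.nn as nn
import ignite.distributed as idist

def training(local_rank):
    model = nn.Linear(10, 2)
    # Adapt the model to the current distributed (or non-distributed) setup.
    model = idist.auto_model(model)
    # ... create optimizer, data loaders and run the training loop ...

with idist.Parallel(backend="gloo", nproc_per_node=2) as parallel:
    parallel.run(training)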

In addition, with NVIDIA/Apex it can be used in the following way:

import ignite.distributed as idist
from apex import amp

model, optimizer = amp.initialize(model, optimizer, opt_level=opt_level)
model = idist.auto_model(model)

Changed in version 0.4.2:

  • Added Horovod distributed framework.

  • Added sync_bn argument.

Changed in version 0.4.3: Added kwargs to idist.auto_model.
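
A minimal sketch of how these arguments can be combined, assuming a native torch distributed configuration (find_unused_parameters is one example of a kwarg forwarded to DistributedDataParallel):

import ignite.distributed as idist

# sync_bn=True converts BatchNorm layers to SyncBatchNorm in native torch
# distributed configurations; remaining kwargs are forwarded to the wrapping
# class (here DistributedDataParallel).
model = idist.auto_model(model, sync_bn=True, find_unused_parameters=True)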