- ignite.distributed.auto.auto_model(model, sync_bn=False, **kwargs)
Helper method to adapt the provided model for non-distributed and distributed configurations (supporting all available distributed backends).
Internally, we perform the following:
- send the model to the current device() if the model's parameters are not already on it.
- wrap the model with torch DistributedDataParallel for native torch distributed if the world size is larger than 1.
- wrap the model with torch DataParallel if no distributed context is found and more than one CUDA device is available.
- broadcast the initial variable states from rank 0 to all other processes if the Horovod distributed framework is used.
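The order of these wrapping strategies can be sketched as a plain-Python decision function. This is a simplified illustration only: `choose_wrapper`, its arguments, and its return strings are hypothetical names, not part of the ignite API.

```python
# Simplified sketch of the decision order described above.
# choose_wrapper and its return values are illustrative, not ignite API.
def choose_wrapper(world_size, num_cuda_devices, backend):
    """Return which wrapping auto_model would apply for a given setup."""
    if backend == "horovod":
        # Horovod: broadcast initial variable states from rank 0
        return "horovod broadcast"
    if backend is not None and world_size > 1:
        # native torch distributed context with several processes
        return "DistributedDataParallel"
    if backend is None and num_cuda_devices > 1:
        # single process, several CUDA devices
        return "DataParallel"
    # otherwise the model is only moved to the current device
    return "none"

print(choose_wrapper(world_size=4, num_cuda_devices=1, backend="nccl"))  # DistributedDataParallel
print(choose_wrapper(world_size=1, num_cuda_devices=2, backend=None))   # DataParallel
```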
- Parameters
model (torch.nn.modules.module.Module) – model to adapt.
sync_bn (bool) – if True, applies torch convert_sync_batchnorm to the model, for native torch distributed only. Default, False. Note: if using NVidia/Apex, batchnorm conversion should be applied before calling amp.initialize.
- Return type
torch.nn.modules.module.Module
import ignite.distributed as idist

model = idist.auto_model(model)
In addition, with NVidia/Apex, it can be used in the following way:
import ignite.distributed as idist

model, optimizer = amp.initialize(model, optimizer, opt_level=opt_level)
model = idist.auto_model(model)
Changed in version 0.4.2: Added Horovod distributed framework.
Changed in version 0.4.3: Added kwargs to auto_model().
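Since kwargs are forwarded to the wrapping class (DistributedDataParallel or DataParallel), an option such as DistributedDataParallel's find_unused_parameters can be passed through auto_model. A minimal sketch of that forwarding, using a stand-in wrapper class: FakeDDP and auto_model_sketch are illustrative names, not ignite API.

```python
# Stand-in for torch DistributedDataParallel, to show kwargs forwarding.
class FakeDDP:
    def __init__(self, module, find_unused_parameters=False):
        self.module = module
        self.find_unused_parameters = find_unused_parameters

def auto_model_sketch(model, **kwargs):
    # kwargs are forwarded verbatim to the wrapping class,
    # mirroring auto_model(model, **kwargs)
    return FakeDDP(model, **kwargs)

wrapped = auto_model_sketch(object(), find_unused_parameters=True)
print(wrapped.find_unused_parameters)  # True
```

Make sure the kwargs you pass are valid for the wrapper actually chosen for your backend, since DataParallel and DistributedDataParallel accept different options.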