- ignite.distributed.auto.auto_dataloader(dataset, **kwargs)
Helper method to create a dataloader adapted for non-distributed and distributed configurations (supporting all available backends from `available_backends()`).
Internally, we create a dataloader with the provided kwargs while applying the following updates:
- batch size is scaled by world size: `batch_size / world_size` if larger or equal world size.
- number of workers is scaled by number of local processes: `num_workers / nprocs` if larger or equal world size.
- if no sampler is provided by the user, a torch `DistributedSampler` is set up.
- if a torch `DistributedSampler` is provided by the user, it is used without wrapping.
- if another sampler is provided, it is wrapped by `DistributedProxySampler`.
- if the default device is 'cuda', `pin_memory` is automatically set to True.
A custom batch sampler is not adapted for distributed configurations. Please make sure that the provided batch sampler is compatible with the distributed configuration.
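The two scaling rules above can be sketched as a small pure-Python helper. Note that `scaled_loader_kwargs` is a hypothetical name for illustration, not part of ignite's API, and integer division is assumed:

```python
def scaled_loader_kwargs(batch_size, num_workers, world_size, nproc_per_node):
    """Hypothetical sketch of the documented scaling rules.

    Not ignite's internal implementation; integer division is assumed.
    """
    # batch size is divided by the world size when it is large enough
    if batch_size >= world_size:
        batch_size //= world_size
    # number of workers is divided by the number of local processes
    if num_workers >= world_size:
        num_workers //= nproc_per_node
    return batch_size, num_workers

# e.g. 2 nodes x 2 GPUs each: world_size=4, nproc_per_node=2
print(scaled_loader_kwargs(32, 4, world_size=4, nproc_per_node=2))
```

With a global batch size of 32 across 4 processes, each process loads batches of 8, and the 4 requested workers are split between the 2 local processes.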
- Return type
```python
import ignite.distributed as idist

train_loader = idist.auto_dataloader(
    train_dataset,
    batch_size=32,
    num_workers=4,
    shuffle=True,
    pin_memory="cuda" in idist.device().type,
    drop_last=True,
)
```