
torchtnt.utils.prepare_module.prepare_ddp

torchtnt.utils.prepare_module.prepare_ddp(module: Module, device: device, strategy: Optional[DDPStrategy] = None) -> DistributedDataParallel

Utility to move a module to the given device and wrap it in DistributedDataParallel.

Parameters:
  • module – module to be wrapped in DDP
  • device – device to which module will be moved
  • strategy – an instance of DDPStrategy that defines the settings passed to the DistributedDataParallel wrapper
Examples::

    import torch
    from torch import nn
    from torchtnt.utils.prepare_module import DDPStrategy, prepare_ddp

    strategy = DDPStrategy(find_unused_parameters=True, gradient_as_bucket_view=True)
    module = nn.Linear(1, 1)
    device = torch.device("cuda")
    ddp_module = prepare_ddp(module, device, strategy)
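Note that wrapping a module in DistributedDataParallel requires an already-initialized process group. A minimal sketch of that prerequisite, using a single-process gloo group on CPU so it runs without GPUs; the direct DistributedDataParallel wrapping below is only an approximation of what prepare_ddp does (the port and single-process setup are illustrative assumptions):

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel

# prepare_ddp assumes torch.distributed is initialized; a single-process
# gloo group is enough for CPU-only experimentation (illustrative setup).
dist.init_process_group(
    backend="gloo", init_method="tcp://127.0.0.1:29507", rank=0, world_size=1
)

module = torch.nn.Linear(1, 1)
device = torch.device("cpu")  # use torch.device("cuda") on a GPU machine

# Rough equivalent of prepare_ddp(module, device) with default settings:
# move the module to the device, then wrap it in DistributedDataParallel.
ddp_module = DistributedDataParallel(module.to(device))

out = ddp_module(torch.ones(1, 1))

dist.destroy_process_group()
```

In a real multi-process job, the process group is typically initialized via torchrun-provided environment variables rather than an explicit TCP address.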
