
torchtnt.utils.prepare_module.prepare_fsdp

torchtnt.utils.prepare_module.prepare_fsdp(module: Module, device: device, strategy: Optional[FSDPStrategy] = None) → FullyShardedDataParallel

Utility to move a module to device and wrap in FullyShardedDataParallel.

Parameters:
  • module – module to be wrapped in FSDP
  • device – device to which the module will be moved
  • strategy – an instance of FSDPStrategy that defines the FSDP settings to apply
Examples::

    import torch
    from torch import nn
    from torchtnt.utils.prepare_module import FSDPStrategy, prepare_fsdp

    strategy = FSDPStrategy(limit_all_gathers=True)
    module = nn.Linear(1, 1)
    device = torch.device("cuda")
    fsdp_module = prepare_fsdp(module, device, strategy)
