
init_distributed

torchtune.training.init_distributed(**kwargs: Dict[str, Any]) → bool

Initialize the process group required for torch.distributed.

Parameters:
    **kwargs (Dict[str, Any]) – Additional keyword arguments to pass to torch.distributed.init_process_group.

Returns:
    True if torch.distributed is initialized.

Return type:
    bool

Raises:
    RuntimeError – If torch.distributed is already initialized.
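
Example (a minimal sketch, not an official recipe): it assumes the script is launched with torchrun, which sets the RANK, WORLD_SIZE, MASTER_ADDR, and MASTER_PORT environment variables that torch.distributed.init_process_group reads. The backend choice below is illustrative, and the single-process fallback branch assumes the function returns False when no distributed launch environment is detected.

    import torch
    from torchtune import training

    # init_distributed forwards keyword arguments to
    # torch.distributed.init_process_group; backend is one such argument.
    backend = "nccl" if torch.cuda.is_available() else "gloo"
    is_distributed = training.init_distributed(backend=backend)

    if is_distributed:
        # Per the docstring, True means torch.distributed is initialized,
        # so collectives and rank queries are now safe to use.
        print(f"Process group ready; this is rank {torch.distributed.get_rank()}")
    else:
        # Assumed fallback: no distributed environment was detected.
        print("Running single-process")

Launched with, for example, torchrun --nproc_per_node=2 train.py (where train.py is a placeholder for your training script), each process joins the same process group before training begins. Because the function raises RuntimeError when torch.distributed is already initialized, call it exactly once, early in your entrypoint.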
