init_distributed¶
- torchtune.training.init_distributed(**kwargs: Dict[str, Any]) bool [source]¶
Initialize the process group required for torch.distributed.
- Parameters:
**kwargs (Dict[str, Any]) – Additional arguments to pass to torch.distributed.init_process_group.
- Returns:
True if torch.distributed is initialized.
- Return type:
bool
- Raises:
RuntimeError – If torch.distributed is already initialized.