get_distributed_backend
- torchtune.training.get_distributed_backend(device_type: str, offload_ops_to_cpu: bool = False) → str [source]
Gets the PyTorch Distributed backend based on device type.
- Parameters:
  - device_type (str) – Device type to get the backend for (e.g. "cuda", "cpu").
  - offload_ops_to_cpu (bool) – Whether any operations should be offloaded to CPU, in which case a CPU backend (gloo) is appended to the returned backend string. Default: False.
Example
>>> get_distributed_backend("cuda")
'nccl'
>>> get_distributed_backend("cpu")
'gloo'
>>> get_distributed_backend("cuda", offload_ops_to_cpu=True)
'cuda:nccl,cpu:gloo'
- Returns:
  Distributed backend for use in torch.distributed.init_process_group.
- Return type:
  str
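
A minimal sketch (not part of the official docs) of how the returned backend string might be passed to torch.distributed.init_process_group. It assumes a typical torchrun-style launch that sets the RANK, WORLD_SIZE, MASTER_ADDR, and MASTER_PORT environment variables.

```python
import torch.distributed as dist
from torchtune.training import get_distributed_backend

# Pick the backend(s) for this device; with offload_ops_to_cpu=True the
# string also maps the "cpu" device to gloo, e.g. 'cuda:nccl,cpu:gloo'.
backend = get_distributed_backend("cuda", offload_ops_to_cpu=True)

# Initialize the default process group with that backend (env:// init by default).
dist.init_process_group(backend=backend)
```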