
torchtnt.utils.distributed.all_gather_tensors

torchtnt.utils.distributed.all_gather_tensors(result: Tensor, group: Optional[ProcessGroup] = None) → List[Tensor]

Function to gather tensors from several distributed processes into a list that is replicated on all processes. Works on tensors that have the same number of dimensions but whose individual dimensions may differ across processes; in that case the tensors are padded to a common shape, gathered, and then trimmed back to their original sizes, so the collective operates on equally shaped tensors on every process.

Parameters:
  • result – the value to sync
  • group – the process group to gather results from. Defaults to all processes (world)
Returns:

gathered_result – a list with length equal to the process group size, where gathered_result[i] corresponds to the result tensor from process i

Return type:

List[Tensor]
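Example

A minimal usage sketch, assuming the script is launched with torchrun and the gloo backend is available; each rank contributes a 1-D tensor of a different length, so the gathered tensors have differing sizes:

    # launch with: torchrun --nproc_per_node=2 example.py
    import torch
    import torch.distributed as dist
    from torchtnt.utils.distributed import all_gather_tensors

    def main() -> None:
        # Assumption: torchrun has set the required environment variables
        # (MASTER_ADDR, MASTER_PORT, RANK, WORLD_SIZE); gloo keeps this CPU-only.
        dist.init_process_group(backend="gloo")
        rank = dist.get_rank()

        # Each rank builds a tensor with a rank-dependent length; the tensors
        # share the number of dimensions (1) but differ in size along it.
        local = torch.arange(rank + 1, dtype=torch.float32)

        # Every rank receives the full list, with gathered[i] holding the
        # tensor contributed by rank i. group=None gathers from all processes.
        gathered = all_gather_tensors(local)
        print(f"rank {rank}: {[t.tolist() for t in gathered]}")

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()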
