torchtnt.utils.distributed.barrier

torchtnt.utils.distributed.barrier() → None

Add a synchronization point across all processes when running distributed code. If torch.distributed is initialized, this function invokes a barrier across the global process group. For more granular process group handling, refer to PGWrapper.
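
For illustration, here is a minimal usage sketch that synchronizes ranks around a rank-0 checkpoint write. The checkpoint path, state dict, and launch method (e.g. torchrun) are assumptions for the example, not part of the API:

    import torch
    import torch.distributed as dist

    from torchtnt.utils.distributed import barrier

    def save_checkpoint(state: dict) -> None:
        # Only rank 0 writes the checkpoint file (path is illustrative).
        if not dist.is_initialized() or dist.get_rank() == 0:
            torch.save(state, "checkpoint.pt")
        # Block every rank here until all ranks arrive, so no rank
        # proceeds (e.g. to load the file) before the write completes.
        # Per the docs above, this is a no-op when torch.distributed
        # is not initialized, so the function also works single-process.
        barrier()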
