torch.cuda.synchronize

torch.cuda.synchronize(device=None)[source]

Wait for all kernels in all streams on a CUDA device to complete.

Parameters

device (torch.device or int, optional) – the device to synchronize. If device is None (default), the current device, as given by current_device(), is used.
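
Example

A minimal sketch of a typical use: timing GPU work. CUDA kernels are launched asynchronously, so the host must synchronize before reading the clock; otherwise only the launch overhead is measured. The tensor sizes and timing approach below are illustrative, not part of the API.

    import time
    import torch

    if torch.cuda.is_available():
        device = torch.device("cuda")
        a = torch.randn(4096, 4096, device=device)
        b = torch.randn(4096, 4096, device=device)

        torch.cuda.synchronize(device)   # make sure setup work has finished
        start = time.perf_counter()
        c = a @ b                        # kernel is queued; the call returns immediately
        torch.cuda.synchronize(device)   # block until the matmul completes
        elapsed = time.perf_counter() - start
        print(f"matmul took {elapsed * 1000:.2f} ms")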
