torch.accelerator.synchronize

torch.accelerator.synchronize(device=None, /)[source]

Wait for all kernels in all streams on the given device to complete.

Parameters

device (torch.device, str, int, optional) – the device to synchronize. Its type must match the current accelerator's device type. If not given, the current device index returned by torch.accelerator.current_device_index() is used.

Note

This function is a no-op if the current accelerator is not initialized.
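This makes it safe to call eagerly, for example at program start before any accelerator work has been queued; a minimal sketch:

>>> torch.accelerator.synchronize()  # returns immediately if the accelerator is uninitialized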

Example:

>>> assert torch.accelerator.is_available(), "No available accelerators detected."
>>> start_event = torch.Event(enable_timing=True)
>>> end_event = torch.Event(enable_timing=True)
>>> start_event.record()
>>> tensor = torch.randn(100, device=torch.accelerator.current_accelerator())
>>> total = torch.sum(tensor)  # renamed from `sum` to avoid shadowing the built-in
>>> end_event.record()
>>> torch.accelerator.synchronize()  # block the host until both recorded events complete
>>> elapsed_time_ms = start_event.elapsed_time(end_event)
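Here synchronize() blocks the host until all queued work, including end_event.record(), has finished, so that elapsed_time() reads a completed pair of events.

As the device parameter above indicates, a device may also be given explicitly as a torch.device, a device string, or an index. A minimal sketch of the three equivalent forms (the concrete device type, e.g. "cuda", depends on your machine):

>>> dev = torch.accelerator.current_accelerator()
>>> torch.accelerator.synchronize(dev)       # torch.device object
>>> torch.accelerator.synchronize(str(dev))  # device string, e.g. "cuda"
>>> torch.accelerator.synchronize(0)         # device index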
