torch.accelerator.synchronize
- torch.accelerator.synchronize(device=None, /)
Wait for all kernels in all streams on the given device to complete.
- Parameters
device (torch.device, str, int, optional) – the device to synchronize. It must match the current accelerator's device type. If not given, torch.accelerator.current_device_idx() is used by default.
Note
This function is a no-op if the current accelerator is not initialized.
Example:
>>> assert torch.accelerator.is_available(), "No available accelerators detected."
>>> start_event = torch.Event(enable_timing=True)
>>> end_event = torch.Event(enable_timing=True)
>>> start_event.record()
>>> tensor = torch.randn(100, device=torch.accelerator.current_accelerator())
>>> total = torch.sum(tensor)
>>> end_event.record()
>>> torch.accelerator.synchronize()
>>> elapsed_time_ms = start_event.elapsed_time(end_event)