torch.xpu.max_memory_allocated

torch.xpu.max_memory_allocated(device=None)

Return the maximum GPU memory occupied by tensors in bytes for a given device.

By default, this returns the peak allocated memory since the beginning of this program. reset_peak_memory_stats() can be used to reset the starting point in tracking this metric. For example, these two functions can measure the peak allocated memory usage of each iteration in a training loop.

Parameters

device (torch.device or int or str, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default).

Return type

int
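
Example

A minimal sketch of the training-loop pattern described above: reset the peak counter at the start of each iteration, then read the per-iteration peak afterwards. This assumes a PyTorch build with XPU support and an available XPU device; the model, optimizer, and data below are placeholders, not part of the API.

import torch

device = torch.device("xpu")
model = torch.nn.Linear(1024, 1024).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()

for step in range(3):
    # Reset the peak-tracking counter so the statistic reflects this iteration only.
    torch.xpu.reset_peak_memory_stats(device)

    inputs = torch.randn(64, 1024, device=device)
    targets = torch.randn(64, 1024, device=device)

    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

    # Peak memory allocated by tensors (in bytes) since the reset above.
    peak_bytes = torch.xpu.max_memory_allocated(device)
    print(f"step {step}: peak allocated = {peak_bytes / 1024**2:.1f} MiB")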
