torch.cuda.reset_max_memory_allocated

torch.cuda.reset_max_memory_allocated(device=None)

Reset the starting point in tracking maximum GPU memory occupied by tensors for a given device.

See max_memory_allocated() for details.

Parameters

device (torch.device or int, optional) – selected device. Resets the peak-tracking starting point for the current device, given by current_device(), if device is None (default).

Warning

This function now calls reset_peak_memory_stats(), which resets all peak memory stats.

Note

See Memory management for more details about GPU memory management.
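
Example (a minimal sketch, assuming a CUDA-capable device is available; the reported byte counts depend on the environment):

>>> import torch
>>> x = torch.empty(1024, 1024, device="cuda")  # allocate roughly 4 MiB of tensor memory
>>> del x                                       # free it; the peak allocation is still remembered
>>> torch.cuda.max_memory_allocated()           # reports the earlier peak
>>> torch.cuda.reset_max_memory_allocated()     # reset the starting point for peak tracking
>>> torch.cuda.max_memory_allocated()           # now reports only currently allocated bytes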
