torch.cuda.memory_summary

torch.cuda.memory_summary(device=None, abbreviated=False)

Returns a human-readable printout of the current memory allocator statistics for a given device.

This can be useful to display periodically during training, or when handling out-of-memory exceptions.

Parameters:
  • device (torch.device or int, optional) – selected device. Returns printout for the current device, given by current_device(), if device is None (default).

  • abbreviated (bool, optional) – whether to return an abbreviated summary (default: False).

Return type:

str

Note

See Memory management for more details about GPU memory management.
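A minimal sketch of both uses mentioned above: printing the summary periodically, and printing it when a CUDA out-of-memory error is caught. The tensor shape and the use of torch.cuda.OutOfMemoryError here are illustrative assumptions, not part of this API's documentation.

```python
import torch

if torch.cuda.is_available():
    # Periodic use: print an abbreviated allocator summary, e.g. once per epoch.
    # memory_summary returns a str, so it can also be written to a log file.
    print(torch.cuda.memory_summary(abbreviated=True))

    # OOM handling: dump the full summary to help diagnose the failure.
    # The allocation size below is an arbitrary example.
    try:
        x = torch.empty(10_000, 10_000, device="cuda")
    except torch.cuda.OutOfMemoryError:
        print(torch.cuda.memory_summary(device=None, abbreviated=False))
        raise
else:
    print("CUDA is not available; no allocator statistics to report.")
```

Since the return value is a string, the same call works with any logging setup (for example, `logging.info(torch.cuda.memory_summary())`).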
