torch.cuda.caching_allocator_alloc

torch.cuda.caching_allocator_alloc(size, device=None, stream=None)[source]

Performs a memory allocation using the CUDA memory allocator.

Memory is allocated for a given device and stream; this function is intended to be used for interoperability with other frameworks. Allocated memory is released through caching_allocator_delete().

Parameters:
  • size (int) – number of bytes to be allocated.

  • device (torch.device or int, optional) – selected device. If it is None the default CUDA device is used.

  • stream (torch.cuda.Stream or int, optional) – selected stream. If it is None then the default stream for the selected device is used.

Note

See Memory management for more details about GPU memory management.
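A minimal sketch of the allocate/use/release cycle described above. The size and the printed pointer handling are illustrative choices, not part of the API; the only calls assumed are caching_allocator_alloc() and caching_allocator_delete() named in this page:

```python
import torch

if torch.cuda.is_available():
    # Allocate 1 MiB on the current device's default stream
    # (device=None, stream=None as documented above).
    ptr = torch.cuda.caching_allocator_alloc(1024 * 1024)
    try:
        # `ptr` is a raw device pointer (a Python int) that can be
        # handed to another CUDA-aware framework or library.
        print(hex(ptr))
    finally:
        # Memory from caching_allocator_alloc() is NOT garbage-collected;
        # it must be released explicitly.
        torch.cuda.caching_allocator_delete(ptr)
```

Pairing the allocation with caching_allocator_delete() in a try/finally block ensures the memory is returned to the caching allocator even if the interoperating code raises.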
