torch.cuda.get_allocator_backend

torch.cuda.get_allocator_backend()

Return a string describing the active allocator backend as set by PYTORCH_CUDA_ALLOC_CONF. Currently available backends are native (PyTorch’s native caching allocator) and cudaMallocAsync (CUDA’s built-in asynchronous allocator).

Note

See Memory management for details on choosing the allocator backend.

Return type

str
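
The backend is selected through the PYTORCH_CUDA_ALLOC_CONF environment variable, a comma-separated list of `key:value` options. The sketch below is a hypothetical helper (not part of the PyTorch API) showing how the `backend` option could be read from that variable; the real `torch.cuda.get_allocator_backend()` reports the backend the allocator was actually initialized with.

```python
import os

def allocator_backend_from_env(default="native"):
    # Hypothetical helper: parse PYTORCH_CUDA_ALLOC_CONF, which holds
    # comma-separated "key:value" options, and return the "backend"
    # option if present, else the default ("native").
    conf = os.environ.get("PYTORCH_CUDA_ALLOC_CONF", "")
    for option in filter(None, conf.split(",")):
        key, _, value = option.partition(":")
        if key.strip() == "backend":
            return value.strip()
    return default

os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "backend:cudaMallocAsync"
print(allocator_backend_from_env())  # cudaMallocAsync
```

Note that PYTORCH_CUDA_ALLOC_CONF must be set before the first CUDA allocation in the process; on a CUDA-enabled build, `torch.cuda.get_allocator_backend()` then returns the corresponding string.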
