MemPool
- class torch.cuda.MemPool(*args, **kwargs)
MemPool represents a pool of memory in a caching allocator. Currently, it’s just the ID of the pool object maintained in the CUDACachingAllocator.
- Parameters
allocator (torch._C._cuda_CUDAAllocator, optional) – a torch._C._cuda_CUDAAllocator object that can be used to define how memory gets allocated in the pool. If allocator is None (default), memory allocation follows the default/current configuration of the CUDACachingAllocator.
- property allocator: Optional[_cuda_CUDAAllocator]
Returns the allocator this MemPool routes allocations to.
- snapshot()
Return a snapshot of the CUDA memory allocator pool state across all devices.
Interpreting the output of this function requires familiarity with the memory allocator internals.
Note
See Memory management for more details about GPU memory management.