torch.cuda.set_per_process_memory_fraction

torch.cuda.set_per_process_memory_fraction(fraction, device=None)[source]

Set memory fraction for a process.

The fraction is used to limit the caching allocator's memory allocation on a CUDA device. The allowed value equals the total visible memory multiplied by the fraction. If a process tries to allocate more than the allowed value, the allocator raises an out-of-memory error.

Parameters
  • fraction (float) – Range: 0~1. Allowed memory equals total_memory * fraction.

  • device (torch.device or int, optional) – selected device. If it is None, the default CUDA device is used.

Note

In general, the total available free memory is less than the total capacity.
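As a hedged sketch of typical usage: the snippet below caps the caching allocator at half of the device's total memory, then reports the resulting cap. The 0.5 fraction and the use of torch.cuda.get_device_properties to read total capacity are illustrative choices, not part of this function's contract; the guard makes the script a no-op on machines without a CUDA device.

```python
import torch

fraction = 0.5  # illustrative value; any float in [0, 1] is allowed

if torch.cuda.is_available():
    device = 0  # example device index; defaults to the current device if omitted
    # Cap this process's caching allocator at total_memory * fraction.
    torch.cuda.set_per_process_memory_fraction(fraction, device=device)

    total = torch.cuda.get_device_properties(device).total_memory
    allowed = int(total * fraction)
    print(f"Allocations on device {device} capped at ~{allowed} bytes")
else:
    print("CUDA not available; nothing to configure")
```

Exceeding the cap surfaces as a torch.cuda.OutOfMemoryError on allocation, the same error raised when physical memory is exhausted, so existing OOM handling continues to work.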
