
torch.backends

torch.backends controls the behavior of various backends that PyTorch supports.

These backends include:

  • torch.backends.cuda

  • torch.backends.cudnn

  • torch.backends.mps

  • torch.backends.mkl

  • torch.backends.mkldnn

  • torch.backends.openmp

torch.backends.cuda

torch.backends.cuda.is_built()[source]

Returns whether PyTorch is built with CUDA support. Note that this doesn’t necessarily mean CUDA is available; it only means that if this PyTorch binary were run on a machine with working CUDA drivers and devices, we would be able to use it.
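
For instance, a guarded device pick distinguishes compile-time support from runtime availability (a minimal sketch; torch.cuda.is_available() is the runtime check that complements is_built()):

    import torch

    # is_built() only says the binary was compiled with CUDA support;
    # torch.cuda.is_available() also checks for a working driver and at
    # least one visible device.
    if torch.backends.cuda.is_built() and torch.cuda.is_available():
        device = torch.device("cuda")
    else:
        device = torch.device("cpu")
    x = torch.ones(2, 2, device=device)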

torch.backends.cuda.matmul.allow_tf32

A bool that controls whether TensorFloat-32 tensor cores may be used in matrix multiplications on Ampere or newer GPUs. See TensorFloat-32 (TF32) on Ampere devices.

torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction

A bool that controls whether reduced precision reductions (e.g., with fp16 accumulation type) are allowed with fp16 GEMMs.
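
Both matmul flags are plain attributes that can be toggled at runtime; a minimal sketch (defaults may differ across PyTorch releases):

    import torch

    # Allow TF32 tensor cores in matmuls on Ampere or newer GPUs
    # (faster, at slightly reduced precision).
    torch.backends.cuda.matmul.allow_tf32 = True

    # Forbid reduced-precision (fp16-accumulated) reductions in fp16 GEMMs,
    # trading some speed for accuracy.
    torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction = False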

torch.backends.cuda.cufft_plan_cache

cufft_plan_cache caches the cuFFT plans.

size

A read-only int that shows the number of plans currently in the cuFFT plan cache.

max_size

An int that controls the capacity of the cuFFT plan cache.

clear()

Clears the cuFFT plan cache.
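
A minimal sketch of inspecting and bounding the cache (attribute access on cufft_plan_cache targets the current device’s cache; indexing, e.g. cufft_plan_cache[0], targets a specific device):

    import torch

    if torch.cuda.is_available():
        cache = torch.backends.cuda.cufft_plan_cache
        print(cache.size)    # read-only: number of plans currently cached
        cache.max_size = 32  # bound the cache capacity
        cache.clear()        # drop all cached plans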

torch.backends.cuda.preferred_linalg_library(backend=None)[source]

Warning

This flag is experimental and subject to change.

When PyTorch runs a CUDA linear algebra operation it often uses the cuSOLVER or MAGMA libraries, and if both are available it decides which to use with a heuristic. This flag (a str) allows overriding those heuristics.

  • If “cusolver” is set then cuSOLVER will be used wherever possible.

  • If “magma” is set then MAGMA will be used wherever possible.

  • If “default” (the default) is set then heuristics will be used to pick between cuSOLVER and MAGMA if both are available.

  • When no input is given, this function returns the currently preferred library.

Note: When a library is preferred other libraries may still be used if the preferred library doesn’t implement the operation(s) called. This flag may achieve better performance if PyTorch’s heuristic library selection is incorrect for your application’s inputs.

Currently supported linalg operators:
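
Whatever the exact operator list for a given release, a minimal usage sketch looks like this (the API is experimental, per the warning above):

    import torch

    # With no argument, the call returns the currently preferred backend.
    current = torch.backends.cuda.preferred_linalg_library()
    print(current)

    # Prefer cuSOLVER for subsequent CUDA linalg operations; pass "default"
    # to restore the heuristic selection.
    torch.backends.cuda.preferred_linalg_library("cusolver")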

torch.backends.cudnn

torch.backends.cudnn.version()[source]

Returns the version of cuDNN.

torch.backends.cudnn.is_available()[source]

Returns a bool indicating if cuDNN is currently available.
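
For example (the returned value is an integer encoding of the version; e.g. cuDNN 8.2.0 is reported as 8200):

    import torch

    if torch.backends.cudnn.is_available():
        print(torch.backends.cudnn.version())  # integer-encoded, e.g. 8200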

torch.backends.cudnn.enabled

A bool that controls whether cuDNN is enabled.

torch.backends.cudnn.allow_tf32

A bool that controls whether TensorFloat-32 tensor cores may be used in cuDNN convolutions on Ampere or newer GPUs. See TensorFloat-32 (TF32) on Ampere devices.

torch.backends.cudnn.deterministic

A bool that, if True, causes cuDNN to only use deterministic convolution algorithms. See also torch.are_deterministic_algorithms_enabled() and torch.use_deterministic_algorithms().

torch.backends.cudnn.benchmark

A bool that, if True, causes cuDNN to benchmark multiple convolution algorithms and select the fastest.
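
One common configuration sketch (one possible trade-off, not the only valid setup):

    import torch

    # Reproducibility: restrict cuDNN to deterministic convolution
    # algorithms and disable autotuning, at some cost in speed.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

    # Alternatively, for throughput when input shapes are fixed:
    # torch.backends.cudnn.benchmark = True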

torch.backends.mps

torch.backends.mps.is_available()[source]

Returns a bool indicating if MPS is currently available.

torch.backends.mps.is_built()[source]

Returns whether PyTorch is built with MPS support. Note that this doesn’t necessarily mean MPS is available; it only means that if this PyTorch binary were run on a machine with working MPS drivers and devices, we would be able to use it.
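
A guarded device pick, mirroring the CUDA sketch above:

    import torch

    if torch.backends.mps.is_available():
        device = torch.device("mps")
    else:
        # Either the binary lacks MPS support (is_built() is False) or
        # the machine has no usable MPS device.
        device = torch.device("cpu")
    x = torch.ones(2, 2, device=device)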

torch.backends.mkl

torch.backends.mkl.is_available()[source]

Returns whether PyTorch is built with MKL support.

torch.backends.mkldnn

torch.backends.mkldnn.is_available()[source]

Returns whether PyTorch is built with MKL-DNN support.

torch.backends.openmp

torch.backends.openmp.is_available()[source]

Returns whether PyTorch is built with OpenMP support.
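
These three predicates take no arguments and are often checked together when diagnosing a CPU build; a minimal sketch:

    import torch

    print("MKL:    ", torch.backends.mkl.is_available())
    print("MKL-DNN:", torch.backends.mkldnn.is_available())
    print("OpenMP: ", torch.backends.openmp.is_available())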
