inference_mode

class torch.inference_mode(mode=True)

Context-manager that enables or disables inference mode.

InferenceMode is a context manager analogous to no_grad, to be used when you are certain your operations will have no interactions with autograd (e.g., data processing or model evaluation). Code run under this mode gets better performance by disabling view tracking and version counter bumps.
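
As an illustration of the typical use, the following minimal sketch runs a forward pass of a small model under inference mode; the nn.Linear model and tensor shapes are illustrative only:

>>> import torch
>>> import torch.nn as nn
>>> model = nn.Linear(3, 1)
>>> _ = model.eval()  # eval mode concerns dropout/batch norm, not autograd
>>> inputs = torch.randn(4, 3)
>>> with torch.inference_mode():
...   predictions = model(inputs)  # no view tracking or version counter bumps
>>> predictions.requires_grad
False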

This context manager is thread local; it will not affect computation in other threads.
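
For example, a plain Python thread spawned while the main thread is inside inference mode starts with the default autograd state (a minimal sketch to illustrate the thread-local behaviour; the worker function is illustrative):

>>> import threading
>>> import torch
>>> results = {}
>>> def worker():
...   # this thread is not in inference mode, even though the main thread is
...   x = torch.ones(2, requires_grad=True)
...   results["requires_grad"] = (x * x).requires_grad
>>> with torch.inference_mode():
...   t = threading.Thread(target=worker)
...   t.start()
...   t.join()
>>> results["requires_grad"]
True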

Also functions as a decorator. (Make sure to instantiate with parentheses.)

Note

Inference mode is one of several mechanisms that can enable or disable gradients locally; see Locally disabling gradient computation for more information on how they compare.

Parameters

mode (bool) – Flag whether to enable (True) or disable (False) inference mode.

Example:
>>> import torch
>>> x = torch.ones(1, 2, 3, requires_grad=True)
>>> with torch.inference_mode():
...   y = x * x
>>> y.requires_grad
False
>>> y._version
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: Inference tensors do not track version counter.
>>> @torch.inference_mode()
... def func(x):
...   return x * x
>>> out = func(x)
>>> out.requires_grad
False
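>>> # Passing mode=False leaves inference mode disabled, which is convenient
>>> # when the flag is computed at runtime (a minimal sketch; the
>>> # use_inference flag is illustrative):
>>> use_inference = False
>>> with torch.inference_mode(mode=use_inference):
...   z = x * x
>>> z.requires_grad
True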
