no_grad

class torch.autograd.no_grad

Context-manager that disables gradient calculation.

Disabling gradient calculation is useful for inference, when you are sure that you will not call Tensor.backward(). It will reduce memory consumption for computations that would otherwise have requires_grad=True.

In this mode, the result of every computation will have requires_grad=False, even when the inputs have requires_grad=True.

This context manager is thread local; it will not affect computation in other threads.
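
For example, a minimal sketch (assuming the standard-library threading module) showing that entering no_grad in the main thread does not disable gradients in a worker thread:

>>> import torch
>>> import threading
>>> x = torch.tensor([1.], requires_grad=True)
>>> result = []
>>> def worker():
...     # grad mode was never disabled in this thread, so the result still tracks gradients
...     result.append((x * 2).requires_grad)
>>> with torch.no_grad():
...     t = threading.Thread(target=worker)
...     t.start()
...     t.join()
>>> result[0]
True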

Also functions as a decorator. (Make sure to instantiate with parentheses.)

Note

No-grad is one of several mechanisms that can enable or disable gradients locally; see Locally disabling gradient computation for more information on how they compare.

Note

This API does not apply to forward-mode AD. If you want to disable forward AD for a computation, you can unpack your dual tensors.
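
As a minimal sketch of that approach, assuming the torch.autograd.forward_ad API (dual_level, make_dual, unpack_dual): computing with the unpacked primal leaves the tangent behind, so the result carries no forward gradient.

>>> import torch
>>> import torch.autograd.forward_ad as fwAD
>>> primal = torch.tensor([1.])
>>> tangent = torch.tensor([1.])
>>> with fwAD.dual_level():
...     dual = fwAD.make_dual(primal, tangent)
...     # no_grad does not stop forward-mode AD; unpack the dual tensor
...     # and compute with the primal so the result has no tangent
...     y = fwAD.unpack_dual(dual).primal * 2
...     print(fwAD.unpack_dual(y).tangent is None)
True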

Example:

>>> import torch
>>> x = torch.tensor([1.], requires_grad=True)
>>> with torch.no_grad():
...   y = x * 2
>>> y.requires_grad
False
>>> @torch.no_grad()
... def doubler(x):
...     return x * 2
>>> z = doubler(x)
>>> z.requires_grad
False
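
In practice, no_grad is typically wrapped around a model's forward pass during inference so that no autograd graph is recorded; the model and input names below are illustrative only:

>>> model = torch.nn.Linear(4, 2)
>>> inputs = torch.randn(3, 4)
>>> with torch.no_grad():
...     preds = model(inputs)  # no graph is built for this forward pass
>>> preds.requires_grad
False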
