class torch.no_grad[source]

Disabling gradient calculation is useful for inference, when you are sure that you will not call Tensor.backward(). It will reduce memory consumption for computations that would otherwise have requires_grad=True.

In this mode, the result of every computation will have requires_grad=False, even when the inputs have requires_grad=True.

This context manager is thread local; it will not affect computation in other threads.
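Thread-locality can be demonstrated with a small sketch (illustrative only, not from the original docs): a thread spawned inside a `no_grad()` block still records gradients, because grad mode is per-thread and new threads start with gradients enabled.

```python
import threading
import torch

x = torch.ones(2, requires_grad=True)
results = {}

def worker():
    # Runs in a separate thread: the main thread's no_grad() does
    # not apply here, so autograd is still recording.
    results["worker"] = (x * 2).requires_grad

with torch.no_grad():
    t = threading.Thread(target=worker)
    t.start()
    t.join()
    # In this thread, gradient recording is disabled.
    results["main"] = (x * 2).requires_grad

print(results["main"])    # False
print(results["worker"])  # True
```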

Also functions as a decorator. (Make sure to instantiate it with parentheses.)

Note

This API does not apply to forward-mode AD. If you want to disable forward AD for a computation, you can unpack your dual tensors.
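A minimal sketch of the unpacking approach, using `torch.autograd.forward_ad` (the values here are illustrative): extracting the primal from a dual tensor and computing with it alone keeps the result out of forward-mode AD, even though `no_grad()` would not have stopped it.

```python
import torch
import torch.autograd.forward_ad as fwAD

primal = torch.tensor([1.0, 2.0])
tangent = torch.tensor([1.0, 0.0])

with fwAD.dual_level():
    dual = fwAD.make_dual(primal, tangent)
    # Rebuild the input from its primal only, so the computation
    # below carries no tangent through forward-mode AD.
    plain, _ = fwAD.unpack_dual(dual)
    y = plain * 3
    _, y_tangent = fwAD.unpack_dual(y)

print(y_tangent)  # None: y was excluded from forward AD
```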

Example:

>>> x = torch.tensor([1.], requires_grad=True)
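The example above is cut off after its first line; a complete script consistent with the behavior described (context-manager and decorator forms) looks like this:

```python
import torch

x = torch.tensor([1.0], requires_grad=True)

# As a context manager: results computed inside do not track gradients.
with torch.no_grad():
    y = x * 2
print(y.requires_grad)  # False

# As a decorator (instantiated with parentheses).
@torch.no_grad()
def doubler(a):
    return a * 2

z = doubler(x)
print(z.requires_grad)  # False
```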