Typedef torch::NoGradGuard

Typedef Documentation

using torch::NoGradGuard = at::NoGradGuard

A RAII, thread-local guard that disables gradient calculation.

Disabling gradient calculation is useful for inference, when you are sure that you will not call at::Tensor::backward. It will reduce memory consumption for computations that would otherwise have requires_grad() == true.

In this mode, the result of every computation will have requires_grad() == false, even when the inputs have requires_grad() == true.

This guard is thread-local; it will not affect computation in other threads.

Example:

#include <torch/torch.h>
#include <iostream>

int main() {
  std::cout << std::boolalpha; // print bools as `true`/`false` rather than 1/0

  auto x = torch::tensor({1.}, torch::requires_grad());
  {
    torch::NoGradGuard no_grad;
    auto y = x * 2;
    std::cout << y.requires_grad() << std::endl; // prints `false`
  }
  {
    auto doubler = [](torch::Tensor x) {
      torch::NoGradGuard no_grad;
      return x * 2;
    };
    auto z = doubler(x);
    std::cout << z.requires_grad() << std::endl; // prints `false`
  }
}
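
The guard follows RAII semantics: when the NoGradGuard object is destroyed, the previous gradient mode for that thread is restored, and because the mode is thread-local, a guard active on one thread does not affect computation on another. The sketch below is not part of the upstream documentation; it is a minimal illustration assuming only torch::tensor, torch::NoGradGuard, and the standard <thread> header.

#include <torch/torch.h>
#include <iostream>
#include <thread>

int main() {
  std::cout << std::boolalpha;

  auto x = torch::tensor({1.}, torch::requires_grad());

  {
    torch::NoGradGuard no_grad; // gradient tracking off for this thread...

    // ...but a freshly spawned thread starts with its own gradient mode,
    // which defaults to enabled, so the same computation there tracks gradients.
    std::thread worker([&] {
      auto w = x * 2;
      std::cout << "worker: " << w.requires_grad() << std::endl; // prints `true`
    });
    worker.join();

    auto y = x * 2;
    std::cout << "guarded: " << y.requires_grad() << std::endl; // prints `false`
  } // no_grad destroyed here; the previous mode for this thread is restored

  auto z = x * 2;
  std::cout << "after: " << z.requires_grad() << std::endl; // prints `true`
}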
