Typedef torch::NoGradGuard¶
Defined in File utils.h
Typedef Documentation¶
using torch::NoGradGuard = at::NoGradGuard¶
A RAII, thread-local guard that disables gradient calculation.
Disabling gradient calculation is useful for inference, when you are sure that you will not call at::Tensor::backward. It reduces memory consumption for computations that would otherwise have requires_grad() == true.

In this mode, the result of every computation will have requires_grad() == false, even when the inputs have requires_grad() == true.

This guard is thread-local; it will not affect computation in other threads.
Example:
auto x = torch::tensor({1.}, torch::requires_grad());
{
  torch::NoGradGuard no_grad;
  auto y = x * 2;
  std::cout << std::boolalpha << y.requires_grad() << std::endl; // prints `false`
}
{
  auto doubler = [](torch::Tensor x) {
    torch::NoGradGuard no_grad;
    return x * 2;
  };
  auto z = doubler(x);
  std::cout << std::boolalpha << z.requires_grad() << std::endl; // prints `false`
}