
Function torch::autograd::grad

Function Documentation

variable_list torch::autograd::grad(const variable_list &outputs, const variable_list &inputs, const variable_list &grad_outputs = {}, c10::optional<bool> retain_graph = c10::nullopt, bool create_graph = false, bool allow_unused = false)

Computes and returns the sum of gradients of outputs with respect to the inputs.

grad_outputs should be a sequence of the same length as outputs, containing the “vector” in the Jacobian-vector product, usually the pre-computed gradients w.r.t. each of the outputs. If an output doesn’t require_grad, then the gradient can be torch::Tensor().
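
For illustration, a minimal sketch (assuming a standard LibTorch build with torch/torch.h available; the variable names are illustrative, not from this reference) of passing grad_outputs for a non-scalar output. The “vector” v is contracted with the Jacobian, so a tensor of ones reproduces the gradient of the summed output.

    // Sketch: grad_outputs as the "vector" v in the Jacobian-vector product.
    // Assumes a standard LibTorch setup; names are illustrative.
    #include <torch/torch.h>
    #include <iostream>

    int main() {
      auto x = torch::tensor({1.0, 2.0, 3.0}, torch::requires_grad());
      auto y = x * x;                       // non-scalar output, dy/dx = diag(2x)

      // v is contracted with the Jacobian; ones_like(y) reproduces the
      // gradient of y.sum() with respect to x.
      auto v = torch::ones_like(y);

      auto grads = torch::autograd::grad(/*outputs=*/{y},
                                         /*inputs=*/{x},
                                         /*grad_outputs=*/{v});

      std::cout << grads[0] << std::endl;   // expected: 2, 4, 6
      return 0;
    }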

Parameters
  • outputs – Outputs of the differentiated function.

  • inputs – Inputs w.r.t. which the gradient will be returned (and not accumulated into at::Tensor::grad).

  • grad_outputs – The “vector” in the Jacobian-vector product. Usually gradients w.r.t. each output. torch::Tensor() values can be specified for scalar Tensors or ones that don’t require grad. If a torch::Tensor() value would be acceptable for all grad_outputs, then this argument is optional. Default: {}.

  • retain_graph – If false, the graph used to compute the grad will be freed. Note that in nearly all cases setting this option to true is not needed and often can be worked around in a much more efficient way. Defaults to the value of create_graph.

  • create_graph – If true, the graph of the derivative will be constructed, allowing higher order derivative products to be computed. Default: false.

  • allow_unused – If false, specifying inputs that were not used when computing outputs (and therefore their grad is always zero) is an error. Defaults to false.
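
A second illustrative sketch under the same LibTorch assumptions, showing create_graph used to take a second-order derivative through a further grad() call, and allow_unused applied to an input that does not affect the output.

    // Sketch: create_graph enables a second grad() call for higher-order
    // derivatives; allow_unused returns an undefined tensor for inputs
    // that do not influence the outputs. Assumes a standard LibTorch setup.
    #include <torch/torch.h>
    #include <iostream>

    int main() {
      auto x = torch::tensor({2.0}, torch::requires_grad());
      auto unused = torch::tensor({5.0}, torch::requires_grad());
      auto y = x * x * x;                   // y = x^3

      // First-order gradient, keeping the derivative graph (create_graph=true).
      auto dy = torch::autograd::grad({y}, {x, unused},
                                      /*grad_outputs=*/{torch::ones_like(y)},
                                      /*retain_graph=*/c10::nullopt,
                                      /*create_graph=*/true,
                                      /*allow_unused=*/true);

      std::cout << dy[0] << std::endl;            // dy/dx = 3*x^2 = 12
      std::cout << dy[1].defined() << std::endl;  // 0: unused input -> undefined tensor

      // Second-order gradient taken through the graph built above.
      auto d2y = torch::autograd::grad({dy[0]}, {x}, {torch::ones_like(dy[0])});
      std::cout << d2y[0] << std::endl;           // d2y/dx2 = 6*x = 12
      return 0;
    }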
