Automatic differentiation package - torch.autograd¶
torch.autograd provides classes and functions implementing automatic differentiation of arbitrary scalar-valued functions. It requires minimal changes to the existing code: you only need to declare the Tensors for which gradients should be computed with the requires_grad=True keyword.
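As a minimal sketch of this workflow (the tensor shapes and values below are arbitrary):
>>> import torch
>>> x = torch.ones(2, 2, requires_grad=True)  # leaf tensor tracked by autograd
>>> y = (x * 3).sum()                          # scalar output, so backward() needs no argument
>>> y.backward()                               # populates x.grad with dy/dx
>>> x.grad
tensor([[3., 3.],
        [3., 3.]])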

torch.autograd.backward(tensors, grad_tensors=None, retain_graph=None, create_graph=False, grad_variables=None)[source]¶
Computes the sum of gradients of given tensors w.r.t. graph leaves.
The graph is differentiated using the chain rule. If any of tensors are non-scalar (i.e. their data has more than one element) and require gradient, the function additionally requires specifying grad_tensors. It should be a sequence of matching length that contains the gradient of the differentiated function w.r.t. the corresponding tensors (None is an acceptable value for all tensors that don't need gradient tensors).
This function accumulates gradients in the leaves - you might need to zero them before calling it.
Parameters:
 tensors (sequence of Tensor) – Tensors of which the derivative will be computed.
 grad_tensors (sequence of (Tensor or None)) – Gradients w.r.t. each element of corresponding tensors. None values can be specified for scalar Tensors or ones that don't require grad. If a None value would be acceptable for all grad_tensors, then this argument is optional.
 retain_graph (bool, optional) – If False, the graph used to compute the grad will be freed. Note that in nearly all cases setting this option to True is not needed and often can be worked around in a much more efficient way. Defaults to the value of create_graph.
 create_graph (bool, optional) – If True, graph of the derivative will be constructed, allowing to compute higher order derivative products. Defaults to False.
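As an illustrative sketch of calling backward on a non-scalar output (tensor shapes here are arbitrary), grad_tensors plays the role of the vector in a vector-Jacobian product:
>>> import torch
>>> x = torch.randn(3, requires_grad=True)
>>> y = x * 2                                  # non-scalar output
>>> torch.autograd.backward([y], grad_tensors=[torch.ones_like(y)])
>>> x.grad                                     # the vector-Jacobian product, here 2 * ones
tensor([2., 2., 2.])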

torch.autograd.grad(outputs, inputs, grad_outputs=None, retain_graph=None, create_graph=False, only_inputs=True, allow_unused=False)[source]¶
Computes and returns the sum of gradients of outputs w.r.t. the inputs.
grad_outputs should be a sequence of length matching output containing the pre-computed gradients w.r.t. each of the outputs. If an output doesn't require_grad, then the gradient can be None.
If only_inputs is True, the function will only return a list of gradients w.r.t the specified inputs. If it's False, then gradient w.r.t. all remaining leaves will still be computed, and will be accumulated into their .grad attribute.
Parameters:
 outputs (sequence of Tensor) – outputs of the differentiated function.
 inputs (sequence of Tensor) – Inputs w.r.t. which the gradient will be returned (and not accumulated into .grad).
 grad_outputs (sequence of Tensor) – Gradients w.r.t. each output. None values can be specified for scalar Tensors or ones that don't require grad. If a None value would be acceptable for all grad_tensors, then this argument is optional. Default: None.
 retain_graph (bool, optional) – If False, the graph used to compute the grad will be freed. Note that in nearly all cases setting this option to True is not needed and often can be worked around in a much more efficient way. Defaults to the value of create_graph.
 create_graph (bool, optional) – If True, graph of the derivative will be constructed, allowing to compute higher order derivative products. Default: False.
 allow_unused (bool, optional) – If False, specifying inputs that were not used when computing outputs (and therefore their grad is always zero) is an error. Defaults to False.
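A brief sketch of the difference from backward() (the variable names below are arbitrary): grad() returns the gradients instead of accumulating them into .grad, and with create_graph=True the result can itself be differentiated:
>>> import torch
>>> x = torch.randn(3, requires_grad=True)
>>> y = (x ** 2).sum()
>>> (dx,) = torch.autograd.grad(y, x, create_graph=True)  # dx == 2 * x, and x.grad stays None
>>> (d2x,) = torch.autograd.grad(dx.sum(), x)             # second-order gradient, here all 2.0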
Locally disabling gradient computation¶

class torch.autograd.no_grad[source]¶
Context-manager that disables gradient calculation.
Disabling gradient calculation is useful for inference, when you are sure that you will not call Tensor.backward(). It will reduce memory consumption for computations that would otherwise have requires_grad=True. In this mode, the result of every computation will have requires_grad=False, even when the inputs have requires_grad=True.
Also functions as a decorator.
Example:
>>> x = torch.tensor([1], requires_grad=True)
>>> with torch.no_grad():
...     y = x * 2
>>> y.requires_grad
False
>>> @torch.no_grad()
... def doubler(x):
...     return x * 2
>>> z = doubler(x)
>>> z.requires_grad
False

class torch.autograd.enable_grad[source]¶
Context-manager that enables gradient calculation.
Enables gradient calculation inside a no_grad context. This has no effect outside of no_grad.
Also functions as a decorator.
Example:
>>> x = torch.tensor([1], requires_grad=True)
>>> with torch.no_grad():
...     with torch.enable_grad():
...         y = x * 2
>>> y.requires_grad
True
>>> y.backward()
>>> x.grad
>>> @torch.enable_grad()
... def doubler(x):
...     return x * 2
>>> with torch.no_grad():
...     z = doubler(x)
>>> z.requires_grad
True

class torch.autograd.set_grad_enabled(mode)[source]¶
Context-manager that sets gradient calculation to on or off.
set_grad_enabled will enable or disable grads based on its argument mode. It can be used as a context-manager or as a function.
Parameters: mode (bool) – Flag whether to enable grad (True), or disable (False). This can be used to conditionally enable gradients.
Example:
>>> x = torch.tensor([1], requires_grad=True)
>>> is_train = False
>>> with torch.set_grad_enabled(is_train):
...     y = x * 2
>>> y.requires_grad
False
>>> torch.set_grad_enabled(True)
>>> y = x * 2
>>> y.requires_grad
True
>>> torch.set_grad_enabled(False)
>>> y = x * 2
>>> y.requires_grad
False
In-place operations on Tensors¶
Supporting in-place operations in autograd is a hard matter, and we discourage their use in most cases. Autograd's aggressive buffer freeing and reuse makes it very efficient and there are very few occasions when in-place operations actually lower memory usage by any significant amount. Unless you're operating under heavy memory pressure, you might never need to use them.
In-place correctness checks¶
All Tensors keep track of in-place operations applied to them, and if the implementation detects that a tensor was saved for backward in one of the functions, but it was modified in-place afterwards, an error will be raised once the backward pass is started. This ensures that if you're using in-place functions and not seeing any errors, you can be sure that the computed gradients are correct.
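A minimal sketch of how such an error surfaces (the exact error text may differ between versions): sigmoid() saves its output for the backward pass, so modifying that output in-place makes the subsequent backward fail.
>>> import torch
>>> x = torch.randn(3, requires_grad=True)
>>> y = x.sigmoid()       # the output is saved for the backward pass
>>> y.add_(1)             # in-place modification of the saved tensor
>>> y.sum().backward()    # raises a RuntimeError about a variable modified by an in-place operation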
Variable (deprecated)¶
Warning
The Variable API has been deprecated: Variables are no longer necessary to use autograd with tensors. Autograd automatically supports Tensors with requires_grad set to True. Below please find a quick guide on what has changed:
 Variable(tensor) and Variable(tensor, requires_grad) still work as expected, but they return Tensors instead of Variables.
 var.data is the same thing as tensor.data.
 Methods such as var.backward(), var.detach(), var.register_hook() now work on tensors with the same method names.
In addition, one can now create tensors with requires_grad=True using factory methods such as torch.randn(), torch.zeros(), torch.ones(), and others like the following:
autograd_tensor = torch.randn((2, 3, 4), requires_grad=True)
Tensor autograd functions¶

class torch.Tensor¶
backward(gradient=None, retain_graph=None, create_graph=False)[source]¶
Computes the gradient of current tensor w.r.t. graph leaves.
The graph is differentiated using the chain rule. If the tensor is non-scalar (i.e. its data has more than one element) and requires gradient, the function additionally requires specifying gradient. It should be a tensor of matching type and location, that contains the gradient of the differentiated function w.r.t. self.
This function accumulates gradients in the leaves - you might need to zero them before calling it.
Parameters:
 gradient (Tensor or None) – Gradient w.r.t. the tensor. If it is a tensor, it will be automatically converted to a Tensor that does not require grad unless create_graph is True. None values can be specified for scalar Tensors or ones that don't require grad. If a None value would be acceptable then this argument is optional.
 retain_graph (bool, optional) – If False, the graph used to compute the grads will be freed. Note that in nearly all cases setting this option to True is not needed and often can be worked around in a much more efficient way. Defaults to the value of create_graph.
 create_graph (bool, optional) – If True, graph of the derivative will be constructed, allowing to compute higher order derivative products. Defaults to False.
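As a short sketch of supplying the gradient argument for a non-scalar tensor (values here are arbitrary):
>>> import torch
>>> x = torch.tensor([1., 2., 3.], requires_grad=True)
>>> y = x * x                           # non-scalar, so a gradient argument is required
>>> y.backward(gradient=torch.ones(3))  # equivalent to calling backward on y.sum()
>>> x.grad                              # 2 * x
tensor([2., 4., 6.])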
detach()¶
Returns a new Tensor, detached from the current graph.
The result will never require gradient.
Note
Returned Tensor uses the same data tensor as the original one. In-place modifications on either of them will be seen, and may trigger errors in correctness checks.
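A small sketch of the shared-storage behaviour described in the note above (variable names are arbitrary):
>>> import torch
>>> x = torch.ones(3, requires_grad=True)
>>> y = x.detach()        # no gradient tracking, but same underlying data as x
>>> y.requires_grad
False
>>> y.zero_()             # in-place change through the detached tensor...
tensor([0., 0., 0.])
>>> x                     # ...is visible through the original as well
tensor([0., 0., 0.], requires_grad=True)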

detach_()¶
Detaches the Tensor from the graph that created it, making it a leaf. Views cannot be detached in-place.

grad¶
This attribute is None by default and becomes a Tensor the first time a call to backward() computes gradients for self. The attribute will then contain the gradients computed and future calls to backward() will accumulate (add) gradients into it.
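A brief sketch of this accumulation behaviour (values are arbitrary):
>>> import torch
>>> x = torch.ones(2, requires_grad=True)
>>> (x * 2).sum().backward()
>>> x.grad
tensor([2., 2.])
>>> (x * 2).sum().backward()   # a second backward adds into the existing .grad
>>> x.grad
tensor([4., 4.])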

is_leaf¶
All Tensors that have requires_grad which is False will be leaf Tensors by convention.
For Tensors that have requires_grad which is True, they will be leaf Tensors if they were created by the user. This means that they are not the result of an operation and so grad_fn is None.
Only leaf Tensors will have their grad populated during a call to backward(). To get grad populated for non-leaf Tensors, you can use retain_grad().
Example:
>>> a = torch.rand(10, requires_grad=True)
>>> a.is_leaf
True
>>> b = torch.rand(10, requires_grad=True).cuda()
>>> b.is_leaf
False
# b was created by the operation that cast a cpu Tensor into a cuda Tensor
>>> c = torch.rand(10, requires_grad=True) + 2
>>> c.is_leaf
False
# c was created by the addition operation
>>> d = torch.rand(10).cuda()
>>> d.is_leaf
True
# d does not require gradients and so has no operation creating it (that is tracked by the autograd engine)
>>> e = torch.rand(10).cuda().requires_grad_()
>>> e.is_leaf
True
# e requires gradients and has no operations creating it
>>> f = torch.rand(10, requires_grad=True, device="cuda")
>>> f.is_leaf
True
# f requires grad, has no operation creating it

register_hook(hook)[source]¶
Registers a backward hook.
The hook will be called every time a gradient with respect to the Tensor is computed. The hook should have the following signature:
hook(grad) -> Tensor or None
The hook should not modify its argument, but it can optionally return a new gradient which will be used in place of grad.
This function returns a handle with a method handle.remove() that removes the hook from the module.
Example:
>>> v = torch.tensor([0., 0., 0.], requires_grad=True)
>>> h = v.register_hook(lambda grad: grad * 2)  # double the gradient
>>> v.backward(torch.tensor([1., 2., 3.]))
>>> v.grad
 2
 4
 6
[torch.FloatTensor of size (3,)]
>>> h.remove()  # removes the hook

requires_grad¶
Is True if gradients need to be computed for this Tensor, False otherwise.

Function¶

class torch.autograd.Function[source]¶
Records operation history and defines formulas for differentiating ops.
Every operation performed on Tensors creates a new function object, that performs the computation, and records that it happened. The history is retained in the form of a DAG of functions, with edges denoting data dependencies (input <- output). Then, when backward is called, the graph is processed in the topological ordering, by calling backward() methods of each Function object, and passing returned gradients on to next Functions.
Normally, the only way users interact with functions is by creating subclasses and defining new operations. This is a recommended way of extending torch.autograd.
Each function object is meant to be used only once (in the forward pass).
Examples:
>>> class Exp(Function):
>>>
>>>     @staticmethod
>>>     def forward(ctx, i):
>>>         result = i.exp()
>>>         ctx.save_for_backward(result)
>>>         return result
>>>
>>>     @staticmethod
>>>     def backward(ctx, grad_output):
>>>         result, = ctx.saved_tensors
>>>         return grad_output * result
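Continuing the Exp example above as a sketch, a custom Function is invoked through apply() rather than by calling forward() directly (the input values below are arbitrary):
>>> x = torch.randn(3, requires_grad=True)
>>> y = Exp.apply(x)
>>> y.sum().backward()
>>> torch.allclose(x.grad, x.exp())   # d/dx exp(x) = exp(x)
True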

static backward(ctx, *grad_outputs)[source]¶
Defines a formula for differentiating the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t the given output, and each returned value should be the gradient w.r.t. the corresponding input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
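As an illustrative sketch (the Mul class below is a hypothetical example, not part of the library), ctx.needs_input_grad lets backward() skip work for inputs that don't require gradients:
>>> class Mul(Function):
...     @staticmethod
...     def forward(ctx, a, b):
...         ctx.save_for_backward(a, b)
...         return a * b
...     @staticmethod
...     def backward(ctx, grad_output):
...         a, b = ctx.saved_tensors
...         # return one gradient per forward input; None where no gradient is needed
...         grad_a = grad_output * b if ctx.needs_input_grad[0] else None
...         grad_b = grad_output * a if ctx.needs_input_grad[1] else None
...         return grad_a, grad_b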

static forward(ctx, *args, **kwargs)[source]¶
Performs the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
The context can be used to store tensors that can be then retrieved during the backward pass.

Numerical gradient checking¶

torch.autograd.gradcheck(func, inputs, eps=1e-06, atol=1e-05, rtol=0.001, raise_exception=True, check_sparse_nnz=False)[source]¶
Check gradients computed via small finite differences against analytical gradients w.r.t. tensors in inputs that are of floating point type and with requires_grad=True.
The check between numerical and analytical gradients uses allclose().
Note
The default values are designed for input of double precision. This check will likely fail if input is of less precision, e.g., FloatTensor.
Warning
If any checked tensor in input has overlapping memory, i.e., different indices pointing to the same memory address (e.g., from torch.expand()), this check will likely fail because the numerical gradients computed by point perturbation at such indices will change values at all other indices that share the same memory address.
Parameters:
 func (function) – a Python function that takes Tensor inputs and returns a Tensor or a tuple of Tensors
 inputs (tuple of Tensor or Tensor) – inputs to the function
 eps (float, optional) – perturbation for finite differences
 atol (float, optional) – absolute tolerance
 rtol (float, optional) – relative tolerance
 raise_exception (bool, optional) – indicating whether to raise an exception if the check fails. The exception gives more information about the exact nature of the failure. This is helpful when debugging gradchecks.
 check_sparse_nnz (bool, optional) – if True, gradcheck allows for SparseTensor input, and for any SparseTensor at input, gradcheck will perform check at nnz positions only.
Returns: True if all differences satisfy allclose condition
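A minimal sketch of a gradcheck call, using double-precision inputs as the note above recommends (the checked function, torch.sigmoid, is just an arbitrary example):
>>> import torch
>>> from torch.autograd import gradcheck
>>> inp = torch.randn(4, 3, dtype=torch.double, requires_grad=True)
>>> gradcheck(torch.sigmoid, (inp,), eps=1e-6, atol=1e-4)
True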

torch.autograd.gradgradcheck(func, inputs, grad_outputs=None, eps=1e-06, atol=1e-05, rtol=0.001, gen_non_contig_grad_outputs=False, raise_exception=True)[source]¶
Check gradients of gradients computed via small finite differences against analytical gradients w.r.t. tensors in inputs and grad_outputs that are of floating point type and with requires_grad=True.
This function checks that backpropagating through the gradients computed to the given grad_outputs are correct.
The check between numerical and analytical gradients uses allclose().
Note
The default values are designed for input and grad_outputs of double precision. This check will likely fail if they are of less precision, e.g., FloatTensor.
Warning
If any checked tensor in input and grad_outputs has overlapping memory, i.e., different indices pointing to the same memory address (e.g., from torch.expand()), this check will likely fail because the numerical gradients computed by point perturbation at such indices will change values at all other indices that share the same memory address.
Parameters:
 func (function) – a Python function that takes Tensor inputs and returns a Tensor or a tuple of Tensors
 inputs (tuple of Tensor or Tensor) – inputs to the function
 grad_outputs (tuple of Tensor or Tensor, optional) – The gradients with respect to the function's outputs.
 eps (float, optional) – perturbation for finite differences
 atol (float, optional) – absolute tolerance
 rtol (float, optional) – relative tolerance
 gen_non_contig_grad_outputs (bool, optional) – if grad_outputs is None and gen_non_contig_grad_outputs is True, the randomly generated gradient outputs are made to be non-contiguous
 raise_exception (bool, optional) – indicating whether to raise an exception if the check fails. The exception gives more information about the exact nature of the failure. This is helpful when debugging gradchecks.
Returns: True if all differences satisfy allclose condition
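A minimal sketch of a gradgradcheck call (the cubic function below is an arbitrary example with a non-trivial second derivative):
>>> import torch
>>> from torch.autograd import gradgradcheck
>>> inp = torch.randn(4, dtype=torch.double, requires_grad=True)
>>> gradgradcheck(lambda x: (x ** 3).sum(), (inp,))
True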
Profiler¶
Autograd includes a profiler that lets you inspect the cost of different operators inside your model - both on the CPU and GPU. There are two modes implemented at the moment - CPU-only using profile, and nvprof-based (registers both CPU and GPU activity) using emit_nvtx.

class torch.autograd.profiler.profile(enabled=True, use_cuda=False)[source]¶
Context manager that manages autograd profiler state and holds a summary of results.
Parameters:
 enabled (bool, optional) – Setting this to False makes this context manager a no-op. Default: True.
 use_cuda (bool, optional) – Enables timing of CUDA events as well. Default: False.
Example
>>> x = torch.randn((1, 1), requires_grad=True)
>>> with torch.autograd.profiler.profile() as prof:
...     y = x ** 2
...     y.backward()
>>> # NOTE: some columns were removed for brevity
... print(prof)
-------------------------------------  ---------------  ---------------
Name                                         CPU time        CUDA time
-------------------------------------  ---------------  ---------------
PowConstant                                 142.036us          0.000us
N5torch8autograd9GraphRootE                  63.524us          0.000us
PowConstantBackward                         184.228us          0.000us
MulConstant                                  50.288us          0.000us
PowConstant                                  28.439us          0.000us
Mul                                          20.154us          0.000us
N5torch8autograd14AccumulateGradE            13.790us          0.000us
N5torch8autograd5CloneE                       4.088us          0.000us

export_chrome_trace(path)[source]¶
Exports an EventList as a Chrome tracing tools file.
The checkpoint can be later loaded and inspected under the chrome://tracing URL.
Parameters: path (str) – Path where the trace will be written.

key_averages()[source]¶
Averages all function events over their keys.
Returns: An EventList containing FunctionEventAvg objects.

table(sort_by=None)[source]¶
Prints an EventList as a nicely formatted table.
Parameters: sort_by (str, optional) – Attribute used to sort entries. By default they are printed in the same order as they were registered. Valid keys include: cpu_time, cuda_time, cpu_time_total, cuda_time_total, count.
Returns: A string containing the table.


class torch.autograd.profiler.emit_nvtx(enabled=True)[source]¶
Context manager that makes every autograd operation emit an NVTX range.
It is useful when running the program under nvprof:
nvprof --profile-from-start off -o trace_name.prof -- <regular command here>
Unfortunately, there's no way to force nvprof to flush the data it collected to disk, so for CUDA profiling one has to use this context manager to annotate nvprof traces and wait for the process to exit before inspecting them. Then, either NVIDIA Visual Profiler (nvvp) can be used to visualize the timeline, or torch.autograd.profiler.load_nvprof() can load the results for inspection e.g. in Python REPL.
Parameters: enabled (bool, optional) – Setting this to False makes this context manager a no-op. Default: True.
Example
>>> with torch.cuda.profiler.profile():
...     model(x)  # Warmup CUDA memory allocator and profiler
...     with torch.autograd.profiler.emit_nvtx():
...         model(x)
Forward-backward correlation
When viewing a profile created using emit_nvtx in the Nvidia Visual Profiler, correlating each backward-pass op with the corresponding forward-pass op can be difficult. To ease this task, emit_nvtx appends sequence number information to the ranges it generates.
During the forward pass, each function range is decorated with seq=<N>. seq is a running counter, incremented each time a new backward Function object is created and stashed for backward. Thus, the seq=<N> annotation associated with each forward function range tells you that if a backward Function object is created by this forward function, the backward object will receive sequence number N. During the backward pass, the top-level range wrapping each C++ backward Function's apply() call is decorated with stashed seq=<M>. M is the sequence number that the backward object was created with. By comparing stashed seq numbers in backward with seq numbers in forward, you can track down which forward op created each backward Function.
Any functions executed during the backward pass are also decorated with seq=<N>. During default backward (with create_graph=False) this information is irrelevant, and in fact, N may simply be 0 for all such functions. Only the top-level ranges associated with backward Function objects' apply() methods are useful, as a way to correlate these Function objects with the earlier forward pass.
Double-backward
If, on the other hand, a backward pass with create_graph=True is underway (in other words, if you are setting up for a double-backward), each function's execution during backward is given a non-zero, useful seq=<N>. Those functions may themselves create Function objects to be executed later during double-backward, just as the original functions in the forward pass did. The relationship between backward and double-backward is conceptually the same as the relationship between forward and backward: The functions still emit current-sequence-number-tagged ranges, the Function objects they create still stash those sequence numbers, and during the eventual double-backward, the Function objects' apply() ranges are still tagged with stashed seq numbers, which can be compared to seq numbers from the backward pass.
Anomaly detection¶

class torch.autograd.detect_anomaly[source]¶
Context-manager that enables anomaly detection for the autograd engine.
This does two things:
 Running the forward pass with detection enabled will allow the backward pass to print the traceback of the forward operation that created the failing backward function.
 Any backward computation that generates a "nan" value will raise an error.
Example
>>> import torch
>>> from torch import autograd
>>> class MyFunc(autograd.Function):
...     @staticmethod
...     def forward(ctx, inp):
...         return inp.clone()
...     @staticmethod
...     def backward(ctx, gO):
...         # Error during the backward pass
...         raise RuntimeError("Some error in backward")
...         return gO.clone()
>>> def run_fn(a):
...     out = MyFunc.apply(a)
...     return out.sum()
>>> inp = torch.rand(10, 10, requires_grad=True)
>>> out = run_fn(inp)
>>> out.backward()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/your/pytorch/install/torch/tensor.py", line 93, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph)
      File "/your/pytorch/install/torch/autograd/__init__.py", line 90, in backward
        allow_unreachable=True)  # allow_unreachable flag
      File "/your/pytorch/install/torch/autograd/function.py", line 76, in apply
        return self._forward_cls.backward(self, *args)
      File "<stdin>", line 8, in backward
    RuntimeError: Some error in backward
>>> with autograd.detect_anomaly():
...     inp = torch.rand(10, 10, requires_grad=True)
...     out = run_fn(inp)
...     out.backward()
    Traceback of forward call that caused the error:
      File "tmp.py", line 53, in <module>
        out = run_fn(inp)
      File "tmp.py", line 44, in run_fn
        out = MyFunc.apply(a)
    Traceback (most recent call last):
      File "<stdin>", line 4, in <module>
      File "/your/pytorch/install/torch/tensor.py", line 93, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph)
      File "/your/pytorch/install/torch/autograd/__init__.py", line 90, in backward
        allow_unreachable=True)  # allow_unreachable flag
      File "/your/pytorch/install/torch/autograd/function.py", line 76, in apply
        return self._forward_cls.backward(self, *args)
      File "<stdin>", line 8, in backward
    RuntimeError: Some error in backward

class torch.autograd.set_detect_anomaly(mode)[source]¶
Context-manager that sets the anomaly detection for the autograd engine on or off.
set_detect_anomaly will enable or disable the autograd anomaly detection based on its argument mode. It can be used as a context-manager or as a function.
See detect_anomaly above for details of the anomaly detection behaviour.
Parameters: mode (bool) – Flag whether to enable anomaly detection (True), or disable (False).
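A brief sketch of both usages (the computation inside the block is an arbitrary placeholder):
>>> import torch
>>> torch.autograd.set_detect_anomaly(True)          # as a function: enables detection globally
>>> with torch.autograd.set_detect_anomaly(False):   # as a context-manager: only affects the block
...     y = (torch.randn(3, requires_grad=True) * 2).sum()
...     y.backward()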