# Forward-mode Automatic Differentiation (Beta)

This tutorial demonstrates how to use forward-mode AD to compute directional derivatives (or equivalently, Jacobian-vector products).

The tutorial below uses some APIs only available in PyTorch versions >= 1.11 (or nightly builds).

Also note that forward-mode AD is currently in beta. The API is subject to change and operator coverage is still incomplete.

## Basic Usage

Unlike reverse-mode AD, forward-mode AD computes gradients eagerly alongside the forward pass. We can use forward-mode AD to compute a directional derivative by performing the forward pass as before, except we first associate our input with another tensor representing the direction of the directional derivative (or equivalently, the v in a Jacobian-vector product). When an input, which we call “primal”, is associated with a “direction” tensor, which we call “tangent”, the resultant new tensor object is called a “dual tensor” for its connection to dual numbers[0].
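To make the dual-number connection concrete, here is a small PyTorch-free sketch. The `Dual` class and `f` below are illustrative inventions, not torch APIs: arithmetic on pairs `(value, tangent)` propagates f(a + b·ε) = f(a) + f′(a)·b·ε, which is exactly the bookkeeping forward-mode AD performs.

```python
# A minimal illustration of dual-number arithmetic (hypothetical helper,
# not part of torch): carrying (value, tangent) through arithmetic ops
# computes the directional derivative alongside the value.
class Dual:
    def __init__(self, value, tangent=0.0):
        self.value, self.tangent = value, tangent

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.tangent + other.tangent)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (a + a'e) * (b + b'e) = ab + (ab' + a'b)e
        return Dual(self.value * other.value,
                    self.value * other.tangent + self.tangent * other.value)

def f(x, y):
    return x * x + y * y

# Directional derivative of f at (3, 4) in direction (1, 0): df/dx = 2*3 = 6
out = f(Dual(3.0, 1.0), Dual(4.0, 0.0))
print(out.value, out.tangent)  # 25.0 6.0
```

Seeding the input of interest with tangent 1.0 and all others with 0.0 recovers a single partial derivative; an arbitrary tangent vector yields a general Jacobian-vector product.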

As the forward pass is performed, if any input tensors are dual tensors, extra computation is performed to propagate this “sensitivity” of the function.

```python
import torch
import torch.autograd.forward_ad as fwAD

primal = torch.randn(10, 10)
tangent = torch.randn(10, 10)

def fn(x, y):
    return x ** 2 + y ** 2

# All forward AD computation must be performed in the context of
# a dual_level context. All dual tensors created in such a context
# will have their tangents destroyed upon exit. This is to ensure that
# if the output or intermediate results of this computation are reused
# in a future forward AD computation, their tangents (which are associated
# with this computation) won't be confused with tangents from the later
# computation.
with fwAD.dual_level():
    # To create a dual tensor we associate a tensor, which we call the
    # primal, with another tensor of the same size, which we call the tangent.
    # If the layout of the tangent is different from that of the primal,
    # the values of the tangent are copied into a new tensor with the same
    # metadata as the primal. Otherwise, the tangent itself is used as-is.
    #
    # It is also important to note that the dual tensor created by
    # make_dual is a view of the primal.
    dual_input = fwAD.make_dual(primal, tangent)
    assert fwAD.unpack_dual(dual_input).tangent is tangent

    # To demonstrate the case where the copy of the tangent happens,
    # we pass in a tangent with a layout different from that of the primal
    dual_input_alt = fwAD.make_dual(primal, tangent.T)
    assert fwAD.unpack_dual(dual_input_alt).tangent is not tangent

    # Tensors that do not have an associated tangent are automatically
    # considered to have a zero-filled tangent of the same shape.
    plain_tensor = torch.randn(10, 10)
    dual_output = fn(dual_input, plain_tensor)

    # Unpacking the dual returns a namedtuple with primal and tangent
    # as attributes
    jvp = fwAD.unpack_dual(dual_output).tangent
```

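Because fn is elementwise in its first argument, the result can be verified in closed form: d/dε [(x + ε·v)² + y²] at ε = 0 is 2·x·v. The following self-contained sanity check is an addition to the tutorial (the forward_ad calls mirror the usage above; the analytic comparison is the new part):

```python
import torch
import torch.autograd.forward_ad as fwAD

primal = torch.randn(10, 10)
tangent = torch.randn(10, 10)

def fn(x, y):
    return x ** 2 + y ** 2

with fwAD.dual_level():
    dual_input = fwAD.make_dual(primal, tangent)
    plain = torch.randn(10, 10)  # implicitly has a zero tangent
    jvp = fwAD.unpack_dual(fn(dual_input, plain)).tangent

# Elementwise closed form: the tangent of x**2 is 2*x*v, and the
# y**2 term contributes nothing since plain has a zero tangent.
assert torch.allclose(jvp, 2 * primal * tangent)
```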
## Usage with Modules

To use nn.Module with forward AD, replace the parameters of your model with dual tensors before performing the forward pass. At the time of writing, it is not possible to create dual tensor nn.Parameters. As a workaround, one must register the dual tensor as a non-parameter attribute of the module.

```python
import torch
import torch.nn as nn
import torch.autograd.forward_ad as fwAD

model = nn.Linear(5, 5)
input = torch.randn(16, 5)

params = {name: p for name, p in model.named_parameters()}
tangents = {name: torch.rand_like(p) for name, p in params.items()}

with fwAD.dual_level():
    for name, p in params.items():
        # Deregister the parameter, then register the dual tensor in its
        # place as a plain (non-parameter) attribute.
        delattr(model, name)
        setattr(model, name, fwAD.make_dual(p, tangents[name]))

    out = model(input)
    jvp = fwAD.unpack_dual(out).tangent
```

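For nn.Linear the parameter-direction JVP also has a closed form: since out = input @ W.T + b, perturbing the parameters in directions (W_t, b_t) gives a tangent of input @ W_t.T + b_t. This self-contained sanity check is an addition to the tutorial, assuming that closed form:

```python
import torch
import torch.nn as nn
import torch.autograd.forward_ad as fwAD

model = nn.Linear(5, 5)
input = torch.randn(16, 5)

params = {name: p for name, p in model.named_parameters()}
tangents = {name: torch.rand_like(p) for name, p in params.items()}

with fwAD.dual_level():
    for name, p in params.items():
        delattr(model, name)
        setattr(model, name, fwAD.make_dual(p, tangents[name]))
    out = model(input)
    jvp = fwAD.unpack_dual(out).tangent

# Closed form for a linear layer: tangent of (input @ W.T + b) in
# parameter directions (W_t, b_t) is input @ W_t.T + b_t.
expected = input @ tangents["weight"].T + tangents["bias"]
assert torch.allclose(jvp, expected)
```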
## Using Modules stateless API (experimental)

Another way to use nn.Module with forward AD is to utilize the stateless API. NB: at the time of writing, the stateless API is still experimental and subject to change.

```python
import torch.autograd.forward_ad as fwAD
from torch.nn.utils._stateless import functional_call

# We need a fresh module because the functional call requires the
# model to have its parameters registered.
model = nn.Linear(5, 5)

dual_params = {}
with fwAD.dual_level():
    for name, p in params.items():
        # Using the same tangents from the above section
        dual_params[name] = fwAD.make_dual(p, tangents[name])
    out = functional_call(model, dual_params, input)
    jvp2 = fwAD.unpack_dual(out).tangent

# Check our results
assert torch.allclose(jvp, jvp2)

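In newer PyTorch releases (2.0 and later, an assumption since this tutorial targets 1.11), the same computation can be expressed with the torch.func API, which bundles the stateless call and the JVP into a single transform. A hedged sketch, not part of the original tutorial:

```python
import torch
import torch.nn as nn
from torch.func import functional_call, jvp  # assumes PyTorch >= 2.0

model = nn.Linear(5, 5)
input = torch.randn(16, 5)

params = dict(model.named_parameters())
tangents = {name: torch.rand_like(p) for name, p in params.items()}

# torch.func.jvp takes the function, a tuple of primals, and a matching
# tuple of tangents, and returns both the output and its JVP.
def func(params):
    return functional_call(model, params, (input,))

out, jvp_out = jvp(func, (params,), (tangents,))
```

This avoids manually entering a dual_level context or mutating module attributes, at the cost of requiring the newer functional API.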

## Custom autograd Function

Custom Functions also support forward-mode AD. To create a custom Function supporting forward-mode AD, register the jvp() static method. It is possible, but not mandatory, for custom Functions to support both forward and backward AD. See the documentation for more information.

```python
class Fn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, foo):
        result = torch.exp(foo)
        # Tensors stored in ctx can be used in the subsequent forward grad
        # computation.
        ctx.result = result
        return result

    @staticmethod
    def jvp(ctx, gI):
        gO = gI * ctx.result
        # If the tensor stored in ctx will not also be used in the backward pass,
        # one can manually free it using del
        del ctx.result
        return gO

fn = Fn.apply

primal = torch.randn(10, 10, dtype=torch.double, requires_grad=True)
tangent = torch.randn(10, 10)

with fwAD.dual_level():
    dual_input = fwAD.make_dual(primal, tangent)
    dual_output = fn(dual_input)
    jvp = fwAD.unpack_dual(dual_output).tangent
```

```python
# It is important to use autograd.gradcheck to verify that your
# custom autograd Function computes the gradients correctly. By default,
# gradcheck only checks the backward-mode (reverse-mode) AD gradients. Specify
# check_forward_ad=True to also check forward grads. If you did not
# implement the backward formula for your function, you can also tell gradcheck
# to skip the tests that require backward-mode AD by specifying
# check_backward_ad=False, check_undefined_grad=False, and
# check_batched_grad=False.
torch.autograd.gradcheck(Fn.apply, (primal,), check_forward_ad=True,
                         check_backward_ad=False, check_undefined_grad=False,
                         check_batched_grad=False)
```
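Beyond gradcheck, the jvp formula for this particular Function can also be verified analytically: d/dε exp(x + ε·v) at ε = 0 is exp(x)·v. The following self-contained check (an addition to the tutorial; it redefines Fn so it runs on its own) confirms this:

```python
import torch
import torch.autograd.forward_ad as fwAD

class Fn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, foo):
        result = torch.exp(foo)
        ctx.result = result
        return result

    @staticmethod
    def jvp(ctx, gI):
        gO = gI * ctx.result
        del ctx.result
        return gO

primal = torch.randn(10, 10, dtype=torch.double, requires_grad=True)
tangent = torch.randn(10, 10, dtype=torch.double)

with fwAD.dual_level():
    dual_input = fwAD.make_dual(primal, tangent)
    jvp = fwAD.unpack_dual(Fn.apply(dual_input)).tangent

# Analytic check: the JVP of exp is exp(primal) * tangent
assert torch.allclose(jvp, torch.exp(primal) * tangent)
```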