torch.nn¶
Parameters¶

class torch.nn.Parameter[source]¶ A kind of Variable that is to be considered a module parameter.
Parameters are Variable subclasses that have a very special property when used with Modules: when they're assigned as Module attributes, they are automatically added to the list of its parameters, and will appear e.g. in the parameters() iterator. Assigning a Variable doesn't have such an effect. This is because one might want to cache some temporary state, like the last hidden state of the RNN, in the model. If there were no such class as Parameter, these temporaries would get registered too.
Another difference is that parameters can't be volatile and that they require gradient by default.
Parameters:  data (Tensor) – parameter tensor.
 requires_grad (bool, optional) – if the parameter requires gradient. See Excluding subgraphs from backward for more details.
Containers¶
Module¶

class torch.nn.Module[source]¶ Base class for all neural network modules.
Your models should also subclass this class.
Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call .cuda(), etc.

add_module(name, module)[source]¶ Adds a child module to the current module.
The module can be accessed as an attribute using the given name.
Parameters:  name (string) – name of the child module. The child module can be accessed from this module using the given name.
 module (Module) – child module to be added to the module.
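A minimal sketch of when add_module is useful: registering layers whose names are only known at runtime, where plain attribute assignment is impractical (the Net class and 'fc{}' names below are illustrative, not part of torch.nn):

```python
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, depth=3):
        super(Net, self).__init__()
        # Build a dynamic number of layers; each is registered
        # under a generated name, just like attribute assignment.
        for i in range(depth):
            self.add_module('fc{}'.format(i), nn.Linear(4, 4))

net = Net()
print(net.fc0)                        # accessible as an attribute
print(len(list(net.children())))      # 3 registered submodules
```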

apply(fn)[source]¶ Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch.nn.init).
Parameters: fn (Module -> None) – function to be applied to each submodule
Returns: self
Return type: Module
Example:

>>> def init_weights(m):
>>>     print(m)
>>>     if type(m) == nn.Linear:
>>>         m.weight.data.fill_(1.0)
>>>         print(m.weight)
>>>
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear (2 -> 2)
Parameter containing:
 1  1
 1  1
[torch.FloatTensor of size 2x2]
Linear (2 -> 2)
Parameter containing:
 1  1
 1  1
[torch.FloatTensor of size 2x2]
Sequential (
  (0): Linear (2 -> 2)
  (1): Linear (2 -> 2)
)

children
()[source]¶ Returns an iterator over immediate children modules.
Yields: Module – a child module

cuda(device=None)[source]¶ Moves all model parameters and buffers to the GPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on the GPU while being optimized.
Parameters: device (int, optional) – if specified, all parameters will be copied to that device
Returns: self
Return type: Module
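The ordering caveat above can be sketched as follows: move the module first, then build the optimizer, so that the optimizer holds references to the GPU copies of the parameters (the guard on is_available keeps the sketch runnable without a GPU):

```python
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 2)
# Move parameters to the GPU *before* handing them to the optimizer;
# .cuda() replaces them with different objects, and an optimizer built
# earlier would keep updating the stale CPU copies.
if torch.cuda.is_available():
    model.cuda()
optimizer = optim.SGD(model.parameters(), lr=0.1)
```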

double
()[source]¶ Casts all parameters and buffers to double datatype.
Returns: self Return type: Module

eval()[source]¶ Sets the module in evaluation mode.
This has an effect only on modules such as Dropout or BatchNorm.

float
()[source]¶ Casts all parameters and buffers to float datatype.
Returns: self Return type: Module

forward(*input)[source]¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
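A minimal sketch of the note above (the Doubler module is illustrative): calling the instance with m(x) goes through __call__, which runs hooks around forward(); calling m.forward(x) directly would skip them.

```python
import torch
import torch.nn as nn
from torch.autograd import Variable

class Doubler(nn.Module):
    def forward(self, x):
        return 2 * x

m = Doubler()
x = Variable(torch.ones(3))
y = m(x)          # preferred: runs any registered hooks, then forward()
# m.forward(x) computes the same values but silently bypasses hooks
```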

load_state_dict(state_dict, strict=True)[source]¶ Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module's state_dict() function.
Parameters:  state_dict (dict) – A dict containing parameters and persistent buffers.
 strict (bool) – Strictly enforce that the keys in state_dict match the keys returned by this module's state_dict() function.
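A small sketch of the round trip: state_dict() from one module feeds load_state_dict() on another module of the same shape. The same pair of calls, combined with torch.save/torch.load, is the usual checkpointing pattern.

```python
import torch
import torch.nn as nn

src = nn.Linear(5, 3)
dst = nn.Linear(5, 3)

# Copy src's parameters into dst; the keys ('weight', 'bias') match,
# so strict=True (the default) is satisfied.
dst.load_state_dict(src.state_dict())

# Checkpointing uses the same calls with serialization in between:
#   torch.save(src.state_dict(), 'model.pth')
#   dst.load_state_dict(torch.load('model.pth'))
```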

modules()[source]¶ Returns an iterator over all modules in the network.
Yields: Module – a module in the network
Note
Duplicate modules are returned only once. In the following example, l will be returned only once.

>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):
>>>     print(idx, '->', m)
0 -> Sequential (
  (0): Linear (2 -> 2)
  (1): Linear (2 -> 2)
)
1 -> Linear (2 -> 2)

named_children()[source]¶ Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
Yields: (string, Module) – Tuple containing a name and child module
Example:

>>> for name, module in model.named_children():
>>>     if name in ['conv4', 'conv5']:
>>>         print(module)

named_modules(memo=None, prefix='')[source]¶ Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
Yields: (string, Module) – Tuple of name and module
Note
Duplicate modules are returned only once. In the following example, l will be returned only once.

>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()):
>>>     print(idx, '->', m)
0 -> ('', Sequential (
  (0): Linear (2 -> 2)
  (1): Linear (2 -> 2)
))
1 -> ('0', Linear (2 -> 2))

named_parameters(memo=None, prefix='')[source]¶ Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
Yields: (string, Parameter) – Tuple containing the name and parameter
Example:

>>> for name, param in self.named_parameters():
>>>     if name in ['bias']:
>>>         print(param.size())

parameters()[source]¶ Returns an iterator over module parameters.
This is typically passed to an optimizer.
Yields: Parameter – module parameter
Example:

>>> for param in model.parameters():
>>>     print(type(param.data), param.size())
<class 'torch.FloatTensor'> (20L,)
<class 'torch.FloatTensor'> (20L, 1L, 5L, 5L)

register_backward_hook(hook)[source]¶ Registers a backward hook on the module.
The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature:

hook(module, grad_input, grad_output) -> Tensor or None

The grad_input and grad_output may be tuples if the module has multiple inputs or outputs. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations.
Returns: a handle that can be used to remove the added hook by calling handle.remove()
Return type: torch.utils.hooks.RemovableHandle

register_buffer(name, tensor)[source]¶ Adds a persistent buffer to the module.
This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm's running_mean is not a parameter, but is part of the persistent state.
Buffers can be accessed as attributes using given names.
Parameters:  name (string) – name of the buffer. The buffer can be accessed from this module using the given name.
 tensor (Tensor) – buffer to be registered.
Example:

>>> self.register_buffer('running_mean', torch.zeros(num_features))
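The practical difference between a buffer and a parameter can be sketched as follows (the RunningMean module is illustrative): a buffer travels with the module's state_dict() and device moves, but is invisible to parameters() and hence to optimizers.

```python
import torch
import torch.nn as nn

class RunningMean(nn.Module):
    def __init__(self, num_features):
        super(RunningMean, self).__init__()
        self.register_buffer('running_mean', torch.zeros(num_features))

m = RunningMean(4)
print('running_mean' in m.state_dict())   # True: saved and restored with the model
print(len(list(m.parameters())))          # 0: never handed to an optimizer
```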

register_forward_hook(hook)[source]¶ Registers a forward hook on the module.
The hook will be called every time after forward() has computed an output. It should have the following signature:

hook(module, input, output) -> None

The hook should not modify the input or output.
Returns: a handle that can be used to remove the added hook by calling handle.remove()
Return type: torch.utils.hooks.RemovableHandle
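A common use is capturing intermediate activations without touching the module's code; a minimal sketch (the save_output helper and activations dict are illustrative):

```python
import torch
import torch.nn as nn
from torch.autograd import Variable

activations = {}

def save_output(module, input, output):
    # Record the output for later inspection; do not modify it.
    activations['fc'] = output.data

fc = nn.Linear(8, 2)
handle = fc.register_forward_hook(save_output)
fc(Variable(torch.randn(1, 8)))   # hook fires after forward()
handle.remove()                   # detach the hook when no longer needed
```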

register_forward_pre_hook(hook)[source]¶ Registers a forward pre-hook on the module.
The hook will be called every time before forward() is invoked. It should have the following signature:

hook(module, input) -> None

The hook should not modify the input.
Returns: a handle that can be used to remove the added hook by calling handle.remove()
Return type: torch.utils.hooks.RemovableHandle

register_parameter(name, param)[source]¶ Adds a parameter to the module.
The parameter can be accessed as an attribute using the given name.
Parameters:  name (string) – name of the parameter. The parameter can be accessed from this module using the given name.
 param (Parameter) – parameter to be added to the module.
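This call is the explicit form of assigning an nn.Parameter as an attribute, useful when the name is computed at runtime; a minimal sketch (the Scale module and 'gain' name are illustrative):

```python
import torch
import torch.nn as nn

class Scale(nn.Module):
    def __init__(self):
        super(Scale, self).__init__()
        # Equivalent to `self.gain = nn.Parameter(torch.ones(1))`,
        # but the name can be any runtime string.
        self.register_parameter('gain', nn.Parameter(torch.ones(1)))

    def forward(self, x):
        return self.gain * x

m = Scale()
print([name for name, _ in m.named_parameters()])   # ['gain']
```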

state_dict(destination=None, prefix='', keep_vars=False)[source]¶ Returns a dictionary containing a whole state of the module.
Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names.
When keep_vars is True, it returns a Variable for each parameter (rather than a Tensor).
Parameters:  destination (dict, optional) – if not None, the return dictionary is stored into destination. Default: None
 prefix (string, optional) – Adds a prefix to the key (name) of every parameter and buffer in the result dictionary. Default: ''
 keep_vars (bool, optional) – if True, returns a Variable for each parameter. If False, returns a Tensor for each parameter. Default: False
Returns: a dictionary containing a whole state of the module
Return type: dict
Example:

>>> module.state_dict().keys()
['bias', 'weight']

train(mode=True)[source]¶ Sets the module in training mode.
This has an effect only on modules such as Dropout or BatchNorm.
Returns: self
Return type: Module
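The train/eval pair toggles the module's training flag, and the setting propagates to all children; a small sketch:

```python
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 4), nn.Dropout(0.5))
net.train()                 # dropout active during optimization
print(net.training)         # True
net.eval()                  # dropout disabled for inference
print(net.training)         # False
print(net[1].training)      # False: propagated to the Dropout child
```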

Sequential¶

class torch.nn.Sequential(*args)[source]¶ A sequential container. Modules will be added to it in the order they are passed in the constructor. Alternatively, an ordered dict of modules can also be passed in.
To make it easier to understand, here is a small example:

# Example of using Sequential
model = nn.Sequential(
    nn.Conv2d(1, 20, 5),
    nn.ReLU(),
    nn.Conv2d(20, 64, 5),
    nn.ReLU()
)

# Example of using Sequential with OrderedDict
model = nn.Sequential(OrderedDict([
    ('conv1', nn.Conv2d(1, 20, 5)),
    ('relu1', nn.ReLU()),
    ('conv2', nn.Conv2d(20, 64, 5)),
    ('relu2', nn.ReLU())
]))
ModuleList¶

class torch.nn.ModuleList(modules=None)[source]¶ Holds submodules in a list.
ModuleList can be indexed like a regular Python list, but modules it contains are properly registered, and will be visible to all Module methods.
Parameters: modules (list, optional) – a list of modules to add
Example:

class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.linears = nn.ModuleList([nn.Linear(10, 10) for i in range(10)])

    def forward(self, x):
        # ModuleList can act as an iterable, or be indexed using ints
        for i, l in enumerate(self.linears):
            x = self.linears[i // 2](x) + l(x)
        return x
ParameterList¶

class torch.nn.ParameterList(parameters=None)[source]¶ Holds parameters in a list.
ParameterList can be indexed like a regular Python list, but parameters it contains are properly registered, and will be visible to all Module methods.
Parameters: parameters (list, optional) – a list of Parameters to add
Example:

class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.params = nn.ParameterList([nn.Parameter(torch.randn(10, 10)) for i in range(10)])

    def forward(self, x):
        # ParameterList can act as an iterable, or be indexed using ints
        for i, p in enumerate(self.params):
            x = self.params[i // 2].mm(x) + p.mm(x)
        return x

append
(parameter)[source]¶ Appends a given parameter at the end of the list.
Parameters: parameter (nn.Parameter) – parameter to append
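A short sketch of growing a ParameterList with append; each appended parameter is registered under its list index:

```python
import torch
import torch.nn as nn

params = nn.ParameterList()
for i in range(3):
    # Each append registers the parameter under the name str(i).
    params.append(nn.Parameter(torch.randn(2, 2)))
print(len(params))                       # 3
print(len(list(params.parameters())))    # 3: all visible to optimizers
```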

Convolution Layers¶
Conv1d¶

class torch.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)[source]¶ Applies a 1D convolution over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C_{in}, L)\) and output \((N, C_{out}, L_{out})\) can be precisely described as:
\[\begin{array}{ll} out(N_i, C_{out_j}) = bias(C_{out_j}) + \sum_{k=0}^{C_{in}-1} weight(C_{out_j}, k) \star input(N_i, k) \end{array}\]
where \(\star\) is the valid cross-correlation operator, \(N\) is a batch size, \(C\) denotes a number of channels, and \(L\) is a length of the signal sequence.
stride controls the stride for the cross-correlation, a single number or a one-element tuple.
padding controls the amount of implicit zero-padding on both sides for padding number of points.
dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups.
 At groups=1, all inputs are convolved to all outputs.
 At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, with both subsequently concatenated.
 At groups=in_channels, each input channel is convolved with its own set of filters (of size out_channels // in_channels).
Note
Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding.
Note
The configuration when groups == in_channels and out_channels == K * in_channels, where K is a positive integer, is termed in the literature as a depthwise convolution.
In other words, for an input of size \((N, C_{in}, L_{in})\), if you want a depthwise convolution with a depthwise multiplier K, then you use the constructor arguments \((in\_channels=C_{in}, out\_channels=C_{in} * K, ..., groups=C_{in})\)
Parameters:  in_channels (int) – Number of channels in the input image
 out_channels (int) – Number of channels produced by the convolution
 kernel_size (int or tuple) – Size of the convolving kernel
 stride (int or tuple, optional) – Stride of the convolution. Default: 1
 padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0
 dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
 groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
 bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
Shape:
 Input: \((N, C_{in}, L_{in})\)
 Output: \((N, C_{out}, L_{out})\) where \(L_{out} = floor((L_{in} + 2 * padding - dilation * (kernel\_size - 1) - 1) / stride + 1)\)
Variables:  weight (Tensor) – the learnable weights of the module
 bias (Tensor) – the learnable bias of the module
Examples:

>>> m = nn.Conv1d(16, 33, 3, stride=2)
>>> input = autograd.Variable(torch.randn(20, 16, 50))
>>> output = m(input)
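The output length in the example above can be checked against the shape formula: with L_in = 50, padding 0, dilation 1, kernel_size 3, stride 2, L_out = floor((50 - 2 - 1) / 2 + 1) = 24.

```python
import torch
import torch.nn as nn
from torch.autograd import Variable

m = nn.Conv1d(16, 33, 3, stride=2)
output = m(Variable(torch.randn(20, 16, 50)))
# L_out = floor((50 + 2*0 - 1*(3 - 1) - 1) / 2 + 1) = floor(24.5) = 24
print(output.size())   # (20, 33, 24)
```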
Conv2d¶

class torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)[source]¶ Applies a 2D convolution over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C_{in}, H, W)\) and output \((N, C_{out}, H_{out}, W_{out})\) can be precisely described as:
\[\begin{array}{ll} out(N_i, C_{out_j}) = bias(C_{out_j}) + \sum_{k=0}^{C_{in}-1} weight(C_{out_j}, k) \star input(N_i, k) \end{array}\]
where \(\star\) is the valid 2D cross-correlation operator, \(N\) is a batch size, \(C\) denotes a number of channels, \(H\) is a height of input planes in pixels, and \(W\) is width in pixels.
stride controls the stride for the cross-correlation, a single number or a tuple.
padding controls the amount of implicit zero-padding on both sides for padding number of points for each dimension.
dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups.
 At groups=1, all inputs are convolved to all outputs.
 At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, with both subsequently concatenated.
 At groups=in_channels, each input channel is convolved with its own set of filters (of size out_channels // in_channels).
The parameters kernel_size, stride, padding, dilation can either be:
 a single int – in which case the same value is used for the height and width dimension
 a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
Note
Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding.
Note
The configuration when groups == in_channels and out_channels == K * in_channels, where K is a positive integer, is termed in the literature as a depthwise convolution.
In other words, for an input of size \((N, C_{in}, H_{in}, W_{in})\), if you want a depthwise convolution with a depthwise multiplier K, then you use the constructor arguments \((in\_channels=C_{in}, out\_channels=C_{in} * K, ..., groups=C_{in})\)
Parameters:  in_channels (int) – Number of channels in the input image
 out_channels (int) – Number of channels produced by the convolution
 kernel_size (int or tuple) – Size of the convolving kernel
 stride (int or tuple, optional) – Stride of the convolution. Default: 1
 padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0
 dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
 groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
 bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
Shape:
 Input: \((N, C_{in}, H_{in}, W_{in})\)
 Output: \((N, C_{out}, H_{out}, W_{out})\) where \(H_{out} = floor((H_{in} + 2 * padding[0] - dilation[0] * (kernel\_size[0] - 1) - 1) / stride[0] + 1)\) and \(W_{out} = floor((W_{in} + 2 * padding[1] - dilation[1] * (kernel\_size[1] - 1) - 1) / stride[1] + 1)\)
Variables:  weight (Tensor) – the learnable weights of the module
 bias (Tensor) – the learnable bias of the module
Examples:

>>> # With square kernels and equal stride
>>> m = nn.Conv2d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> # non-square kernels and unequal stride and with padding and dilation
>>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
>>> input = autograd.Variable(torch.randn(20, 16, 50, 100))
>>> output = m(input)
Conv3d¶

class torch.nn.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)[source]¶ Applies a 3D convolution over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C_{in}, D, H, W)\) and output \((N, C_{out}, D_{out}, H_{out}, W_{out})\) can be precisely described as:
\[\begin{array}{ll} out(N_i, C_{out_j}) = bias(C_{out_j}) + \sum_{k=0}^{C_{in}-1} weight(C_{out_j}, k) \star input(N_i, k) \end{array}\]
where \(\star\) is the valid 3D cross-correlation operator.
stride controls the stride for the cross-correlation.
padding controls the amount of implicit zero-padding on both sides for padding number of points for each dimension.
dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups.
 At groups=1, all inputs are convolved to all outputs.
 At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, with both subsequently concatenated.
 At groups=in_channels, each input channel is convolved with its own set of filters (of size out_channels // in_channels).
The parameters kernel_size, stride, padding, dilation can either be:
 a single int – in which case the same value is used for the depth, height and width dimension
 a tuple of three ints – in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension
Note
Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding.
Note
The configuration when groups == in_channels and out_channels == K * in_channels, where K is a positive integer, is termed in the literature as a depthwise convolution.
In other words, for an input of size \((N, C_{in}, D_{in}, H_{in}, W_{in})\), if you want a depthwise convolution with a depthwise multiplier K, then you use the constructor arguments \((in\_channels=C_{in}, out\_channels=C_{in} * K, ..., groups=C_{in})\)
Parameters:  in_channels (int) – Number of channels in the input image
 out_channels (int) – Number of channels produced by the convolution
 kernel_size (int or tuple) – Size of the convolving kernel
 stride (int or tuple, optional) – Stride of the convolution. Default: 1
 padding (int or tuple, optional) – Zero-padding added to all three sides of the input. Default: 0
 dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
 groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
 bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
Shape:
 Input: \((N, C_{in}, D_{in}, H_{in}, W_{in})\)
 Output: \((N, C_{out}, D_{out}, H_{out}, W_{out})\) where \(D_{out} = floor((D_{in} + 2 * padding[0] - dilation[0] * (kernel\_size[0] - 1) - 1) / stride[0] + 1)\), \(H_{out} = floor((H_{in} + 2 * padding[1] - dilation[1] * (kernel\_size[1] - 1) - 1) / stride[1] + 1)\) and \(W_{out} = floor((W_{in} + 2 * padding[2] - dilation[2] * (kernel\_size[2] - 1) - 1) / stride[2] + 1)\)
Variables:  weight (Tensor) – the learnable weights of the module
 bias (Tensor) – the learnable bias of the module
Examples:

>>> # With square kernels and equal stride
>>> m = nn.Conv3d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.Conv3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(4, 2, 0))
>>> input = autograd.Variable(torch.randn(20, 16, 10, 50, 100))
>>> output = m(input)
ConvTranspose1d¶

class torch.nn.ConvTranspose1d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1)[source]¶ Applies a 1D transposed convolution operator over an input image composed of several input planes.
This module can be seen as the gradient of Conv1d with respect to its input. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation).
stride controls the stride for the cross-correlation.
padding controls the amount of implicit zero-padding on both sides for padding number of points.
output_padding controls the amount of implicit zero-padding on one side of the output for output_padding number of points.
dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups.
 At groups=1, all inputs are convolved to all outputs.
 At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, with both subsequently concatenated.
 At groups=in_channels, each input channel is convolved with its own set of filters (of size out_channels // in_channels).
Note
Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding.
Parameters:  in_channels (int) – Number of channels in the input image
 out_channels (int) – Number of channels produced by the convolution
 kernel_size (int or tuple) – Size of the convolving kernel
 stride (int or tuple, optional) – Stride of the convolution. Default: 1
 padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0
 output_padding (int or tuple, optional) – Zero-padding added to one side of the output. Default: 0
 groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
 bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
 dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
Shape:
 Input: \((N, C_{in}, L_{in})\)
 Output: \((N, C_{out}, L_{out})\) where \(L_{out} = (L_{in} - 1) * stride - 2 * padding + kernel\_size + output\_padding\)
Variables:  weight (Tensor) – the learnable weights of the module
 bias (Tensor) – the learnable bias of the module
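This entry has no usage example; a small sketch mirroring the Conv1d one, with the output length checked against the formula above (L_out = (50 - 1) * 2 - 0 + 3 + 0 = 101):

```python
import torch
import torch.nn as nn
from torch.autograd import Variable

m = nn.ConvTranspose1d(16, 33, 3, stride=2)
output = m(Variable(torch.randn(20, 16, 50)))
# L_out = (50 - 1) * 2 - 2*0 + 3 + 0 = 101
print(output.size())   # (20, 33, 101)
```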
ConvTranspose2d¶

class torch.nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1)[source]¶ Applies a 2D transposed convolution operator over an input image composed of several input planes.
This module can be seen as the gradient of Conv2d with respect to its input. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation).
stride controls the stride for the cross-correlation.
padding controls the amount of implicit zero-padding on both sides for padding number of points for each dimension.
output_padding controls the amount of implicit zero-padding on one side of the output for output_padding number of points for each dimension.
dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups.
 At groups=1, all inputs are convolved to all outputs.
 At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, with both subsequently concatenated.
 At groups=in_channels, each input channel is convolved with its own set of filters (of size out_channels // in_channels).
The parameters kernel_size, stride, padding, output_padding can either be:
 a single int – in which case the same value is used for the height and width dimensions
 a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
Note
Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding.
Parameters:  in_channels (int) – Number of channels in the input image
 out_channels (int) – Number of channels produced by the convolution
 kernel_size (int or tuple) – Size of the convolving kernel
 stride (int or tuple, optional) – Stride of the convolution. Default: 1
 padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0
 output_padding (int or tuple, optional) – Zero-padding added to one side of the output. Default: 0
 groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
 bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
 dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
Shape:
 Input: \((N, C_{in}, H_{in}, W_{in})\)
 Output: \((N, C_{out}, H_{out}, W_{out})\) where \(H_{out} = (H_{in} - 1) * stride[0] - 2 * padding[0] + kernel\_size[0] + output\_padding[0]\) and \(W_{out} = (W_{in} - 1) * stride[1] - 2 * padding[1] + kernel\_size[1] + output\_padding[1]\)
Variables:  weight (Tensor) – the learnable weights of the module
 bias (Tensor) – the learnable bias of the module
Examples:

>>> # With square kernels and equal stride
>>> m = nn.ConvTranspose2d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.ConvTranspose2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> input = autograd.Variable(torch.randn(20, 16, 50, 100))
>>> output = m(input)
>>> # exact output size can also be specified as an argument
>>> input = autograd.Variable(torch.randn(1, 16, 12, 12))
>>> downsample = nn.Conv2d(16, 16, 3, stride=2, padding=1)
>>> upsample = nn.ConvTranspose2d(16, 16, 3, stride=2, padding=1)
>>> h = downsample(input)
>>> h.size()
torch.Size([1, 16, 6, 6])
>>> output = upsample(h, output_size=input.size())
>>> output.size()
torch.Size([1, 16, 12, 12])
ConvTranspose3d¶

class torch.nn.ConvTranspose3d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1)[source]¶ Applies a 3D transposed convolution operator over an input image composed of several input planes. The transposed convolution operator multiplies each input value element-wise by a learnable kernel, and sums over the outputs from all input feature planes.
This module can be seen as the gradient of Conv3d with respect to its input. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation).
stride controls the stride for the cross-correlation.
padding controls the amount of implicit zero-padding on both sides for padding number of points for each dimension.
output_padding controls the amount of implicit zero-padding on one side of the output for output_padding number of points for each dimension.
dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups.
 At groups=1, all inputs are convolved to all outputs.
 At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, with both subsequently concatenated.
 At groups=in_channels, each input channel is convolved with its own set of filters (of size out_channels // in_channels).
The parameters kernel_size, stride, padding, output_padding can either be:
 a single int – in which case the same value is used for the depth, height and width dimensions
 a tuple of three ints – in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension
Note
Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding.
Parameters:  in_channels (int) – Number of channels in the input image
 out_channels (int) – Number of channels produced by the convolution
 kernel_size (int or tuple) – Size of the convolving kernel
 stride (int or tuple, optional) – Stride of the convolution. Default: 1
 padding (int or tuple, optional) – Zero-padding added to all three sides of the input. Default: 0
 output_padding (int or tuple, optional) – Zero-padding added to one side of the output. Default: 0
 groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
 bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
 dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
Shape:
 Input: \((N, C_{in}, D_{in}, H_{in}, W_{in})\)
 Output: \((N, C_{out}, D_{out}, H_{out}, W_{out})\) where \(D_{out} = (D_{in} - 1) * stride[0] - 2 * padding[0] + kernel\_size[0] + output\_padding[0]\), \(H_{out} = (H_{in} - 1) * stride[1] - 2 * padding[1] + kernel\_size[1] + output\_padding[1]\) and \(W_{out} = (W_{in} - 1) * stride[2] - 2 * padding[2] + kernel\_size[2] + output\_padding[2]\)
Variables:  weight (Tensor) – the learnable weights of the module
 bias (Tensor) – the learnable bias of the module
Examples:

>>> # With square kernels and equal stride
>>> m = nn.ConvTranspose3d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.ConvTranspose3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(0, 4, 2))
>>> input = autograd.Variable(torch.randn(20, 16, 10, 50, 100))
>>> output = m(input)
Pooling Layers¶
MaxPool1d¶

class
torch.nn.
MaxPool1d
(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)[source]¶ Applies a 1D max pooling over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C, L)\) and output \((N, C, L_{out})\) can be precisely described as:
\[\begin{array}{ll} out(N_i, C_j, k) = \max_{m=0}^{kernel\_size - 1} input(N_i, C_j, stride * k + m) \end{array}\]
If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points. dilation controls the spacing between the kernel points. It is harder to describe, but this link has a nice visualization of what dilation does.
Parameters:  kernel_size – the size of the window to take a max over
 stride – the stride of the window. Default value is kernel_size
 padding – implicit zero padding to be added on both sides
 dilation – a parameter that controls the stride of elements in the window
 return_indices – if True, will return the max indices along with the outputs. Useful when unpooling later
 ceil_mode – when True, will use ceil instead of floor to compute the output shape
 Shape:
 Input: \((N, C, L_{in})\)
 Output: \((N, C, L_{out})\) where \(L_{out} = floor((L_{in} + 2 * padding - dilation * (kernel\_size - 1) - 1) / stride + 1)\)
Examples:
>>> # pool of size=3, stride=2
>>> m = nn.MaxPool1d(3, stride=2)
>>> input = autograd.Variable(torch.randn(20, 16, 50))
>>> output = m(input)
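The L_out formula above is easy to check numerically in plain Python (a sketch; maxpool1d_out is a hypothetical helper, not part of torch.nn):

```python
import math

def maxpool1d_out(l_in, kernel, stride=None, padding=0, dilation=1):
    # L_out = floor((L_in + 2*padding - dilation*(kernel - 1) - 1) / stride + 1)
    if stride is None:
        stride = kernel  # default stride is kernel_size
    return math.floor((l_in + 2 * padding - dilation * (kernel - 1) - 1) / stride + 1)

# Matches the example above: MaxPool1d(3, stride=2) on a length-50 input
print(maxpool1d_out(50, 3, stride=2))  # 24
```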
MaxPool2d¶

class
torch.nn.
MaxPool2d
(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)[source]¶ Applies a 2D max pooling over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C, H, W)\), output \((N, C, H_{out}, W_{out})\) and kernel_size \((kH, kW)\) can be precisely described as:
\[\begin{array}{ll} out(N_i, C_j, h, w) = \max_{m=0}^{kH-1} \max_{n=0}^{kW-1} input(N_i, C_j, stride[0] * h + m, stride[1] * w + n) \end{array}\]
If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points. dilation controls the spacing between the kernel points. It is harder to describe, but this link has a nice visualization of what dilation does.
The parameters kernel_size, stride, padding, dilation can either be:
 a single int – in which case the same value is used for the height and width dimension
 a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
Parameters:  kernel_size – the size of the window to take a max over
 stride – the stride of the window. Default value is kernel_size
 padding – implicit zero padding to be added on both sides
 dilation – a parameter that controls the stride of elements in the window
 return_indices – if True, will return the max indices along with the outputs. Useful when unpooling later
 ceil_mode – when True, will use ceil instead of floor to compute the output shape
 Shape:
 Input: \((N, C, H_{in}, W_{in})\)
 Output: \((N, C, H_{out}, W_{out})\) where \(H_{out} = floor((H_{in} + 2 * padding[0] - dilation[0] * (kernel\_size[0] - 1) - 1) / stride[0] + 1)\) \(W_{out} = floor((W_{in} + 2 * padding[1] - dilation[1] * (kernel\_size[1] - 1) - 1) / stride[1] + 1)\)
Examples:
>>> # pool of square window of size=3, stride=2
>>> m = nn.MaxPool2d(3, stride=2)
>>> # pool of non-square window
>>> m = nn.MaxPool2d((3, 2), stride=(2, 1))
>>> input = autograd.Variable(torch.randn(20, 16, 50, 32))
>>> output = m(input)
MaxPool3d¶

class
torch.nn.
MaxPool3d
(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)[source]¶ Applies a 3D max pooling over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C, D, H, W)\), output \((N, C, D_{out}, H_{out}, W_{out})\) and kernel_size \((kD, kH, kW)\) can be precisely described as:
\[\begin{array}{ll} out(N_i, C_j, d, h, w) = \max_{k=0}^{kD-1} \max_{m=0}^{kH-1} \max_{n=0}^{kW-1} input(N_i, C_j, stride[0] * d + k, stride[1] * h + m, stride[2] * w + n) \end{array}\]
If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points. dilation controls the spacing between the kernel points. It is harder to describe, but this link has a nice visualization of what dilation does.
The parameters kernel_size, stride, padding, dilation can either be:
 a single int – in which case the same value is used for the depth, height and width dimension
 a tuple of three ints – in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension
Parameters:  kernel_size – the size of the window to take a max over
 stride – the stride of the window. Default value is kernel_size
 padding – implicit zero padding to be added on all three sides
 dilation – a parameter that controls the stride of elements in the window
 return_indices – if True, will return the max indices along with the outputs. Useful when unpooling later
 ceil_mode – when True, will use ceil instead of floor to compute the output shape
 Shape:
 Input: \((N, C, D_{in}, H_{in}, W_{in})\)
 Output: \((N, C, D_{out}, H_{out}, W_{out})\) where \(D_{out} = floor((D_{in} + 2 * padding[0] - dilation[0] * (kernel\_size[0] - 1) - 1) / stride[0] + 1)\) \(H_{out} = floor((H_{in} + 2 * padding[1] - dilation[1] * (kernel\_size[1] - 1) - 1) / stride[1] + 1)\) \(W_{out} = floor((W_{in} + 2 * padding[2] - dilation[2] * (kernel\_size[2] - 1) - 1) / stride[2] + 1)\)
Examples:
>>> # pool of square window of size=3, stride=2
>>> m = nn.MaxPool3d(3, stride=2)
>>> # pool of non-square window
>>> m = nn.MaxPool3d((3, 2, 2), stride=(2, 1, 2))
>>> input = autograd.Variable(torch.randn(20, 16, 50, 44, 31))
>>> output = m(input)
MaxUnpool1d¶

class
torch.nn.
MaxUnpool1d
(kernel_size, stride=None, padding=0)[source]¶ Computes a partial inverse of MaxPool1d.
MaxPool1d is not fully invertible, since the non-maximal values are lost. MaxUnpool1d takes in as input the output of MaxPool1d including the indices of the maximal values and computes a partial inverse in which all non-maximal values are set to zero.
Note
MaxPool1d can map several input sizes to the same output sizes. Hence, the inversion process can get ambiguous. To accommodate this, you can provide the needed output size as an additional argument output_size in the forward call. See the Inputs and Example below.
Parameters:  Inputs:
 input: the input Tensor to invert
 indices: the indices given out by MaxPool1d
 output_size (optional) : a torch.Size that specifies the targeted output size
 Shape:
 Input: \((N, C, H_{in})\)
 Output: \((N, C, H_{out})\) where \(H_{out} = (H_{in} - 1) * stride[0] - 2 * padding[0] + kernel\_size[0]\) or as given by output_size in the call operator
Example:
>>> pool = nn.MaxPool1d(2, stride=2, return_indices=True)
>>> unpool = nn.MaxUnpool1d(2, stride=2)
>>> input = Variable(torch.Tensor([[[1, 2, 3, 4, 5, 6, 7, 8]]]))
>>> output, indices = pool(input)
>>> unpool(output, indices)
Variable containing:
(0 ,.,.) =
  0  2  0  4  0  6  0  8
[torch.FloatTensor of size 1x1x8]

>>> # Example showcasing the use of output_size
>>> input = Variable(torch.Tensor([[[1, 2, 3, 4, 5, 6, 7, 8, 9]]]))
>>> output, indices = pool(input)
>>> unpool(output, indices, output_size=input.size())
Variable containing:
(0 ,.,.) =
  0  2  0  4  0  6  0  8  0
[torch.FloatTensor of size 1x1x9]

>>> unpool(output, indices)
Variable containing:
(0 ,.,.) =
  0  2  0  4  0  6  0  8
[torch.FloatTensor of size 1x1x8]
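The pool/unpool round trip above can be sketched in plain Python to make the "partial inverse" behavior concrete (hypothetical helpers, not the torch.nn implementation):

```python
def max_pool1d_with_indices(xs, kernel, stride):
    # Track both the max values and their positions, as MaxPool1d does
    # with return_indices=True
    vals, idxs = [], []
    for start in range(0, len(xs) - kernel + 1, stride):
        window = xs[start:start + kernel]
        m = max(window)
        vals.append(m)
        idxs.append(start + window.index(m))
    return vals, idxs

def max_unpool1d(vals, idxs, out_len):
    # Partial inverse: maximal values go back to their positions,
    # every non-maximal position becomes zero
    out = [0] * out_len
    for v, i in zip(vals, idxs):
        out[i] = v
    return out

xs = [1, 2, 3, 4, 5, 6, 7, 8]
vals, idxs = max_pool1d_with_indices(xs, 2, 2)
print(max_unpool1d(vals, idxs, len(xs)))  # [0, 2, 0, 4, 0, 6, 0, 8]
```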
MaxUnpool2d¶

class
torch.nn.
MaxUnpool2d
(kernel_size, stride=None, padding=0)[source]¶ Computes a partial inverse of MaxPool2d.
MaxPool2d is not fully invertible, since the non-maximal values are lost. MaxUnpool2d takes in as input the output of MaxPool2d including the indices of the maximal values and computes a partial inverse in which all non-maximal values are set to zero.
Note
MaxPool2d can map several input sizes to the same output sizes. Hence, the inversion process can get ambiguous. To accommodate this, you can provide the needed output size as an additional argument output_size in the forward call. See the Inputs and Example below.
Parameters:  Inputs:
 input: the input Tensor to invert
 indices: the indices given out by MaxPool2d
 output_size (optional) : a torch.Size that specifies the targeted output size
 Shape:
 Input: \((N, C, H_{in}, W_{in})\)
 Output: \((N, C, H_{out}, W_{out})\) where \(H_{out} = (H_{in} - 1) * stride[0] - 2 * padding[0] + kernel\_size[0]\) \(W_{out} = (W_{in} - 1) * stride[1] - 2 * padding[1] + kernel\_size[1]\) or as given by output_size in the call operator
Example:
>>> pool = nn.MaxPool2d(2, stride=2, return_indices=True)
>>> unpool = nn.MaxUnpool2d(2, stride=2)
>>> input = Variable(torch.Tensor([[[[ 1,  2,  3,  4],
...                                  [ 5,  6,  7,  8],
...                                  [ 9, 10, 11, 12],
...                                  [13, 14, 15, 16]]]]))
>>> output, indices = pool(input)
>>> unpool(output, indices)
Variable containing:
(0 ,0 ,.,.) =
   0   0   0   0
   0   6   0   8
   0   0   0   0
   0  14   0  16
[torch.FloatTensor of size 1x1x4x4]

>>> # specify a different output size than input size
>>> unpool(output, indices, output_size=torch.Size([1, 1, 5, 5]))
Variable containing:
(0 ,0 ,.,.) =
   0   0   0   0   0
   6   0   8   0   0
   0   0   0  14   0
  16   0   0   0   0
   0   0   0   0   0
[torch.FloatTensor of size 1x1x5x5]
MaxUnpool3d¶

class
torch.nn.
MaxUnpool3d
(kernel_size, stride=None, padding=0)[source]¶ Computes a partial inverse of MaxPool3d.
MaxPool3d is not fully invertible, since the non-maximal values are lost. MaxUnpool3d takes in as input the output of MaxPool3d including the indices of the maximal values and computes a partial inverse in which all non-maximal values are set to zero.
Note
MaxPool3d can map several input sizes to the same output sizes. Hence, the inversion process can get ambiguous. To accommodate this, you can provide the needed output size as an additional argument output_size in the forward call. See the Inputs section below.
Parameters:  Inputs:
 input: the input Tensor to invert
 indices: the indices given out by MaxPool3d
 output_size (optional) : a torch.Size that specifies the targeted output size
 Shape:
 Input: \((N, C, D_{in}, H_{in}, W_{in})\)
 Output: \((N, C, D_{out}, H_{out}, W_{out})\) where \(D_{out} = (D_{in} - 1) * stride[0] - 2 * padding[0] + kernel\_size[0]\) \(H_{out} = (H_{in} - 1) * stride[1] - 2 * padding[1] + kernel\_size[1]\) \(W_{out} = (W_{in} - 1) * stride[2] - 2 * padding[2] + kernel\_size[2]\) or as given by output_size in the call operator
Example:
>>> # pool of square window of size=3, stride=2
>>> pool = nn.MaxPool3d(3, stride=2, return_indices=True)
>>> unpool = nn.MaxUnpool3d(3, stride=2)
>>> output, indices = pool(Variable(torch.randn(20, 16, 51, 33, 15)))
>>> unpooled_output = unpool(output, indices)
>>> unpooled_output.size()
torch.Size([20, 16, 51, 33, 15])
AvgPool1d¶

class
torch.nn.
AvgPool1d
(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)[source]¶ Applies a 1D average pooling over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C, L)\), output \((N, C, L_{out})\) and kernel_size \(k\) can be precisely described as:
\[\begin{array}{ll} out(N_i, C_j, l) = 1 / k * \sum_{m=0}^{k-1} input(N_i, C_j, stride * l + m) \end{array}\]
If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points.
The parameters kernel_size, stride, padding can each be an int or a one-element tuple.
Parameters:  kernel_size – the size of the window
 stride – the stride of the window. Default value is kernel_size
 padding – implicit zero padding to be added on both sides
 ceil_mode – when True, will use ceil instead of floor to compute the output shape
 count_include_pad – when True, will include the zero-padding in the averaging calculation
 Shape:
 Input: \((N, C, L_{in})\)
 Output: \((N, C, L_{out})\) where \(L_{out} = floor((L_{in} + 2 * padding - kernel\_size) / stride + 1)\)
Examples:
>>> # pool with window of size=3, stride=2
>>> m = nn.AvgPool1d(3, stride=2)
>>> m(Variable(torch.Tensor([[[1, 2, 3, 4, 5, 6, 7]]])))
Variable containing:
(0 ,.,.) =
  2  4  6
[torch.FloatTensor of size 1x1x3]
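The same computation can be reproduced in plain Python (a sketch; avg_pool1d here is a hypothetical helper, not the torch.nn module):

```python
def avg_pool1d(xs, kernel, stride):
    # Average over each window of length `kernel`, moving by `stride`
    out = []
    for start in range(0, len(xs) - kernel + 1, stride):
        window = xs[start:start + kernel]
        out.append(sum(window) / kernel)
    return out

# Reproduces the AvgPool1d example above
print(avg_pool1d([1, 2, 3, 4, 5, 6, 7], kernel=3, stride=2))  # [2.0, 4.0, 6.0]
```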
AvgPool2d¶

class
torch.nn.
AvgPool2d
(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)[source]¶ Applies a 2D average pooling over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C, H, W)\), output \((N, C, H_{out}, W_{out})\) and kernel_size \((kH, kW)\) can be precisely described as:
\[\begin{array}{ll} out(N_i, C_j, h, w) = 1 / (kH * kW) * \sum_{m=0}^{kH-1} \sum_{n=0}^{kW-1} input(N_i, C_j, stride[0] * h + m, stride[1] * w + n) \end{array}\]
If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points.
The parameters kernel_size, stride, padding can either be:
 a single int – in which case the same value is used for the height and width dimension
 a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
Parameters:  kernel_size – the size of the window
 stride – the stride of the window. Default value is kernel_size
 padding – implicit zero padding to be added on both sides
 ceil_mode – when True, will use ceil instead of floor to compute the output shape
 count_include_pad – when True, will include the zero-padding in the averaging calculation
 Shape:
 Input: \((N, C, H_{in}, W_{in})\)
 Output: \((N, C, H_{out}, W_{out})\) where \(H_{out} = floor((H_{in} + 2 * padding[0] - kernel\_size[0]) / stride[0] + 1)\) \(W_{out} = floor((W_{in} + 2 * padding[1] - kernel\_size[1]) / stride[1] + 1)\)
Examples:
>>> # pool of square window of size=3, stride=2
>>> m = nn.AvgPool2d(3, stride=2)
>>> # pool of non-square window
>>> m = nn.AvgPool2d((3, 2), stride=(2, 1))
>>> input = autograd.Variable(torch.randn(20, 16, 50, 32))
>>> output = m(input)
AvgPool3d¶

class
torch.nn.
AvgPool3d
(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)[source]¶ Applies a 3D average pooling over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C, D, H, W)\), output \((N, C, D_{out}, H_{out}, W_{out})\) and kernel_size \((kD, kH, kW)\) can be precisely described as:
\[\begin{array}{ll} out(N_i, C_j, d, h, w) = 1 / (kD * kH * kW) * \sum_{k=0}^{kD-1} \sum_{m=0}^{kH-1} \sum_{n=0}^{kW-1} input(N_i, C_j, stride[0] * d + k, stride[1] * h + m, stride[2] * w + n) \end{array}\]
If padding is non-zero, then the input is implicitly zero-padded on all three sides for padding number of points.
The parameters kernel_size, stride can either be:
 a single int – in which case the same value is used for the depth, height and width dimension
 a tuple of three ints – in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension
Parameters:  kernel_size – the size of the window
 stride – the stride of the window. Default value is kernel_size
 padding – implicit zero padding to be added on all three sides
 ceil_mode – when True, will use ceil instead of floor to compute the output shape
 count_include_pad – when True, will include the zero-padding in the averaging calculation
 Shape:
 Input: \((N, C, D_{in}, H_{in}, W_{in})\)
 Output: \((N, C, D_{out}, H_{out}, W_{out})\) where \(D_{out} = floor((D_{in} + 2 * padding[0] - kernel\_size[0]) / stride[0] + 1)\) \(H_{out} = floor((H_{in} + 2 * padding[1] - kernel\_size[1]) / stride[1] + 1)\) \(W_{out} = floor((W_{in} + 2 * padding[2] - kernel\_size[2]) / stride[2] + 1)\)
Examples:
>>> # pool of square window of size=3, stride=2
>>> m = nn.AvgPool3d(3, stride=2)
>>> # pool of non-square window
>>> m = nn.AvgPool3d((3, 2, 2), stride=(2, 1, 2))
>>> input = autograd.Variable(torch.randn(20, 16, 50, 44, 31))
>>> output = m(input)
FractionalMaxPool2d¶

class
torch.nn.
FractionalMaxPool2d
(kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None)[source]¶ Applies a 2D fractional max pooling over an input signal composed of several input planes.
Fractional MaxPooling is described in detail in the paper Fractional Max-Pooling by Ben Graham
The max-pooling operation is applied in kH x kW regions by a stochastic step size determined by the target output size. The number of output features is equal to the number of input planes.
Parameters:  kernel_size – the size of the window to take a max over. Can be a single number k (for a square kernel of k x k) or a tuple (kh x kw)
 output_size – the target output size of the image of the form oH x oW. Can be a tuple (oH, oW) or a single number oH for a square image oH x oH
 output_ratio – If one wants to have an output size as a ratio of the input size, this option can be given. This has to be a number or tuple in the range (0, 1)
 return_indices – if True, will return the indices along with the outputs. Useful to pass to nn.MaxUnpool2d. Default: False
Examples
>>> # pool of square window of size=3, and target output size 13x12
>>> m = nn.FractionalMaxPool2d(3, output_size=(13, 12))
>>> # pool of square window and target output size being half of input image size
>>> m = nn.FractionalMaxPool2d(3, output_ratio=(0.5, 0.5))
>>> input = autograd.Variable(torch.randn(20, 16, 50, 32))
>>> output = m(input)
LPPool2d¶

class
torch.nn.
LPPool2d
(norm_type, kernel_size, stride=None, ceil_mode=False)[source]¶ Applies a 2D power-average pooling over an input signal composed of several input planes.
On each window, the function computed is: \(f(X) = pow(sum(pow(X, p)), 1/p)\)
 At p = infinity, one gets Max Pooling
 At p = 1, one gets Sum Pooling (which is proportional to Average Pooling)
The parameters kernel_size, stride can either be:
 a single int – in which case the same value is used for the height and width dimension
 a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
Parameters:  kernel_size – the size of the window
 stride – the stride of the window. Default value is kernel_size
 ceil_mode – when True, will use ceil instead of floor to compute the output shape
 Shape:
 Input: \((N, C, H_{in}, W_{in})\)
 Output: \((N, C, H_{out}, W_{out})\) where \(H_{out} = floor((H_{in} - kernel\_size[0]) / stride[0] + 1)\) \(W_{out} = floor((W_{in} - kernel\_size[1]) / stride[1] + 1)\)
Examples:
>>> # power-2 pool of square window of size=3, stride=2
>>> m = nn.LPPool2d(2, 3, stride=2)
>>> # pool of non-square window of power 1.2
>>> m = nn.LPPool2d(1.2, (3, 2), stride=(2, 1))
>>> input = autograd.Variable(torch.randn(20, 16, 50, 32))
>>> output = m(input)
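The power-average formula can be illustrated on a single pooling window in plain Python (a sketch; lp_pool is a hypothetical helper, not the torch.nn module):

```python
def lp_pool(window, p):
    # f(X) = (sum(x ** p)) ** (1 / p) over one pooling window
    return sum(x ** p for x in window) ** (1.0 / p)

window = [1.0, 2.0, 3.0, 4.0]
print(lp_pool(window, 1))    # 10.0: p = 1 reduces to a plain sum
print(lp_pool(window, 100))  # large p approaches the window max (~4.0)
```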
AdaptiveMaxPool1d¶

class
torch.nn.
AdaptiveMaxPool1d
(output_size, return_indices=False)[source]¶ Applies a 1D adaptive max pooling over an input signal composed of several input planes.
The output size is H, for any input size. The number of output features is equal to the number of input planes.
Parameters:  output_size – the target output size H
 return_indices – if True, will return the indices along with the outputs. Useful to pass to nn.MaxUnpool1d. Default: False
Examples
>>> # target output size of 5
>>> m = nn.AdaptiveMaxPool1d(5)
>>> input = autograd.Variable(torch.randn(1, 64, 8))
>>> output = m(input)
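One way adaptive pooling can choose its windows is sketched below. The windowing rule used here (window i spans floor(i*L/out) to ceil((i+1)*L/out)) is an assumption for illustration, not taken from the torch.nn source:

```python
import math

def adaptive_max_pool1d(xs, output_size):
    # An assumed windowing rule: overlapping windows whose boundaries
    # are spaced so that exactly `output_size` outputs cover the input
    l_in = len(xs)
    out = []
    for i in range(output_size):
        start = math.floor(i * l_in / output_size)
        end = math.ceil((i + 1) * l_in / output_size)
        out.append(max(xs[start:end]))
    return out

# 8 inputs reduced to 5 outputs, whatever the input length
print(adaptive_max_pool1d([1, 3, 2, 5, 4, 6, 0, 7], 5))  # [3, 5, 5, 6, 7]
```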
AdaptiveMaxPool2d¶

class
torch.nn.
AdaptiveMaxPool2d
(output_size, return_indices=False)[source]¶ Applies a 2D adaptive max pooling over an input signal composed of several input planes.
The output is of size H x W, for any input size. The number of output features is equal to the number of input planes.
Parameters:  output_size – the target output size of the image of the form H x W. Can be a tuple (H, W) or a single H for a square image H x H. H and W can be either an int, or None which means the size will be the same as that of the input.
 return_indices – if True, will return the indices along with the outputs. Useful to pass to nn.MaxUnpool2d. Default: False
Examples
>>> # target output size of 5x7
>>> m = nn.AdaptiveMaxPool2d((5,7))
>>> input = autograd.Variable(torch.randn(1, 64, 8, 9))
>>> output = m(input)
>>> # target output size of 7x7 (square)
>>> m = nn.AdaptiveMaxPool2d(7)
>>> input = autograd.Variable(torch.randn(1, 64, 10, 9))
>>> output = m(input)
>>> # target output size of 10x7
>>> m = nn.AdaptiveMaxPool2d((None, 7))
>>> input = autograd.Variable(torch.randn(1, 64, 10, 9))
>>> output = m(input)
AdaptiveMaxPool3d¶

class
torch.nn.
AdaptiveMaxPool3d
(output_size, return_indices=False)[source]¶ Applies a 3D adaptive max pooling over an input signal composed of several input planes.
The output is of size D x H x W, for any input size. The number of output features is equal to the number of input planes.
Parameters:  output_size – the target output size of the image of the form D x H x W. Can be a tuple (D, H, W) or a single D for a cube D x D x D. D, H and W can be either an int, or None which means the size will be the same as that of the input.
 return_indices – if True, will return the indices along with the outputs. Useful to pass to nn.MaxUnpool3d. Default: False
Examples
>>> # target output size of 5x7x9
>>> m = nn.AdaptiveMaxPool3d((5,7,9))
>>> input = autograd.Variable(torch.randn(1, 64, 8, 9, 10))
>>> output = m(input)
>>> # target output size of 7x7x7 (cube)
>>> m = nn.AdaptiveMaxPool3d(7)
>>> input = autograd.Variable(torch.randn(1, 64, 10, 9, 8))
>>> output = m(input)
>>> # target output size of 7x9x8
>>> m = nn.AdaptiveMaxPool3d((7, None, None))
>>> input = autograd.Variable(torch.randn(1, 64, 10, 9, 8))
>>> output = m(input)
AdaptiveAvgPool1d¶

class
torch.nn.
AdaptiveAvgPool1d
(output_size)[source]¶ Applies a 1D adaptive average pooling over an input signal composed of several input planes.
The output size is H, for any input size. The number of output features is equal to the number of input planes.
Parameters: output_size – the target output size H Examples
>>> # target output size of 5
>>> m = nn.AdaptiveAvgPool1d(5)
>>> input = autograd.Variable(torch.randn(1, 64, 8))
>>> output = m(input)
AdaptiveAvgPool2d¶

class
torch.nn.
AdaptiveAvgPool2d
(output_size)[source]¶ Applies a 2D adaptive average pooling over an input signal composed of several input planes.
The output is of size H x W, for any input size. The number of output features is equal to the number of input planes.
Parameters: output_size – the target output size of the image of the form H x W. Can be a tuple (H, W) or a single H for a square image H x H. H and W can be either an int, or None which means the size will be the same as that of the input.
Examples
>>> # target output size of 5x7
>>> m = nn.AdaptiveAvgPool2d((5,7))
>>> input = autograd.Variable(torch.randn(1, 64, 8, 9))
>>> output = m(input)
>>> # target output size of 7x7 (square)
>>> m = nn.AdaptiveAvgPool2d(7)
>>> input = autograd.Variable(torch.randn(1, 64, 10, 9))
>>> output = m(input)
>>> # target output size of 10x7
>>> m = nn.AdaptiveAvgPool2d((None, 7))
>>> input = autograd.Variable(torch.randn(1, 64, 10, 9))
>>> output = m(input)
AdaptiveAvgPool3d¶

class
torch.nn.
AdaptiveAvgPool3d
(output_size)[source]¶ Applies a 3D adaptive average pooling over an input signal composed of several input planes.
The output is of size D x H x W, for any input size. The number of output features is equal to the number of input planes.
Parameters: output_size – the target output size of the form D x H x W. Can be a tuple (D, H, W) or a single number D for a cube D x D x D. D, H and W can be either an int, or None which means the size will be the same as that of the input.
Examples
>>> # target output size of 5x7x9
>>> m = nn.AdaptiveAvgPool3d((5,7,9))
>>> input = autograd.Variable(torch.randn(1, 64, 8, 9, 10))
>>> output = m(input)
>>> # target output size of 7x7x7 (cube)
>>> m = nn.AdaptiveAvgPool3d(7)
>>> input = autograd.Variable(torch.randn(1, 64, 10, 9, 8))
>>> output = m(input)
>>> # target output size of 7x9x8
>>> m = nn.AdaptiveAvgPool3d((7, None, None))
>>> input = autograd.Variable(torch.randn(1, 64, 10, 9, 8))
>>> output = m(input)
Padding Layers¶
ReflectionPad2d¶

class
torch.nn.
ReflectionPad2d
(padding)[source]¶ Pads the input tensor using the reflection of the input boundary.
Parameters: padding (int, tuple) – the size of the padding. If int, uses the same padding in all boundaries. If a 4-tuple, uses (paddingLeft, paddingRight, paddingTop, paddingBottom)  Shape:
 Input: \((N, C, H_{in}, W_{in})\)
 Output: \((N, C, H_{out}, W_{out})\) where \(H_{out} = H_{in} + paddingTop + paddingBottom\) \(W_{out} = W_{in} + paddingLeft + paddingRight\)
Examples:
>>> m = nn.ReflectionPad2d(3)
>>> input = autograd.Variable(torch.randn(16, 3, 320, 480))
>>> output = m(input)
>>> # using different paddings
>>> m = nn.ReflectionPad2d((3, 3, 6, 6))
>>> output = m(input)
ReplicationPad2d¶

class
torch.nn.
ReplicationPad2d
(padding)[source]¶ Pads the input tensor using replication of the input boundary.
Parameters: padding (int, tuple) – the size of the padding. If int, uses the same padding in all boundaries. If a 4-tuple, uses (paddingLeft, paddingRight, paddingTop, paddingBottom)  Shape:
 Input: \((N, C, H_{in}, W_{in})\)
 Output: \((N, C, H_{out}, W_{out})\) where \(H_{out} = H_{in} + paddingTop + paddingBottom\) \(W_{out} = W_{in} + paddingLeft + paddingRight\)
Examples:
>>> m = nn.ReplicationPad2d(3)
>>> input = autograd.Variable(torch.randn(16, 3, 320, 480))
>>> output = m(input)
>>> # using different paddings
>>> m = nn.ReplicationPad2d((3, 3, 6, 6))
>>> output = m(input)
ReplicationPad3d¶

class
torch.nn.
ReplicationPad3d
(padding)[source]¶ Pads the input tensor using replication of the input boundary.
Parameters: padding (int, tuple) – the size of the padding. If int, uses the same padding in all boundaries. If a 6-tuple, uses (paddingLeft, paddingRight, paddingTop, paddingBottom, paddingFront, paddingBack)  Shape:
 Input: \((N, C, D_{in}, H_{in}, W_{in})\)
 Output: \((N, C, D_{out}, H_{out}, W_{out})\) where \(D_{out} = D_{in} + paddingFront + paddingBack\) \(H_{out} = H_{in} + paddingTop + paddingBottom\) \(W_{out} = W_{in} + paddingLeft + paddingRight\)
Examples:
>>> m = nn.ReplicationPad3d(3)
>>> input = autograd.Variable(torch.randn(16, 3, 8, 320, 480))
>>> output = m(input)
>>> # using different paddings
>>> m = nn.ReplicationPad3d((3, 3, 6, 6, 1, 1))
>>> output = m(input)
ZeroPad2d¶

class
torch.nn.
ZeroPad2d
(padding)[source]¶ Pads the input tensor boundaries with zero.
Parameters: padding (int, tuple) – the size of the padding. If int, uses the same padding in all boundaries. If a 4-tuple, uses (paddingLeft, paddingRight, paddingTop, paddingBottom)  Shape:
 Input: \((N, C, H_{in}, W_{in})\)
 Output: \((N, C, H_{out}, W_{out})\) where \(H_{out} = H_{in} + paddingTop + paddingBottom\) \(W_{out} = W_{in} + paddingLeft + paddingRight\)
Examples:
>>> m = nn.ZeroPad2d(3)
>>> input = autograd.Variable(torch.randn(16, 3, 320, 480))
>>> output = m(input)
>>> # using different paddings
>>> m = nn.ZeroPad2d((3, 3, 6, 6))
>>> output = m(input)
ConstantPad2d¶

class
torch.nn.
ConstantPad2d
(padding, value)[source]¶ Pads the input tensor boundaries with a constant value.
For N-d padding, use nn.functional.pad().
Parameters: padding (int, tuple) – the size of the padding. If int, uses the same padding in all boundaries. If a 4-tuple, uses (paddingLeft, paddingRight, paddingTop, paddingBottom)  Shape:
 Input: \((N, C, H_{in}, W_{in})\)
 Output: \((N, C, H_{out}, W_{out})\) where \(H_{out} = H_{in} + paddingTop + paddingBottom\) \(W_{out} = W_{in} + paddingLeft + paddingRight\)
Examples:
>>> m = nn.ConstantPad2d(3, 3.5)
>>> input = autograd.Variable(torch.randn(16, 3, 320, 480))
>>> output = m(input)
>>> # using different paddings
>>> m = nn.ConstantPad2d((3, 3, 6, 6), 3.5)
>>> output = m(input)
Nonlinear Activations¶
ReLU¶

class
torch.nn.
ReLU
(inplace=False)[source]¶ Applies the rectified linear unit function elementwise \({ReLU}(x)= max(0, x)\)
Parameters: inplace – can optionally do the operation inplace. Default: False
 Shape:
 Input: \((N, *)\) where * means any number of additional dimensions
 Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.ReLU()
>>> input = autograd.Variable(torch.randn(2))
>>> print(input)
>>> print(m(input))
ReLU6¶

class
torch.nn.
ReLU6
(inplace=False)[source]¶ Applies the elementwise function \({ReLU6}(x) = min(max(0,x), 6)\)
Parameters: inplace – can optionally do the operation inplace. Default: False
 Shape:
 Input: \((N, *)\) where * means any number of additional dimensions
 Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.ReLU6()
>>> input = autograd.Variable(torch.randn(2))
>>> print(input)
>>> print(m(input))
ELU¶

class
torch.nn.
ELU
(alpha=1.0, inplace=False)[source]¶ Applies element-wise, \(f(x) = max(0,x) + min(0, alpha * (exp(x) - 1))\)
Parameters:  alpha – the alpha value for the ELU formulation. Default: 1.0
 inplace – can optionally do the operation in-place. Default: False
 Shape:
 Input: \((N, *)\) where * means any number of additional dimensions
 Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.ELU()
>>> input = autograd.Variable(torch.randn(2))
>>> print(input)
>>> print(m(input))
SELU¶

class
torch.nn.
SELU
(inplace=False)[source]¶ Applies element-wise, \(f(x) = scale * (\max(0,x) + \min(0, alpha * (\exp(x) - 1)))\), with alpha=1.6732632423543772848170429916717 and scale=1.0507009873554804934193349852946.
More details can be found in the paper Self-Normalizing Neural Networks.
Parameters: inplace (bool, optional) – can optionally do the operation in-place. Default: False
 Shape:
 Input: \((N, *)\) where * means any number of additional dimensions
 Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.SELU()
>>> input = autograd.Variable(torch.randn(2))
>>> print(input)
>>> print(m(input))
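With the constants above, the SELU formula can be evaluated directly in plain Python (a sketch, not the torch.nn implementation):

```python
import math

ALPHA = 1.6732632423543772848170429916717
SCALE = 1.0507009873554804934193349852946

def selu(x):
    # scale * (max(0, x) + min(0, alpha * (exp(x) - 1)))
    return SCALE * (max(0.0, x) + min(0.0, ALPHA * (math.exp(x) - 1)))

print(selu(0.0))  # 0.0
print(selu(1.0))  # SCALE * 1.0, approximately 1.0507
```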
PReLU¶

class
torch.nn.
PReLU
(num_parameters=1, init=0.25)[source]¶ Applies elementwise the function \(PReLU(x) = max(0,x) + a * min(0,x)\) Here “a” is a learnable parameter. When called without arguments, nn.PReLU() uses a single parameter “a” across all input channels. If called with nn.PReLU(nChannels), a separate “a” is used for each input channel.
Note
Weight decay should not be used when learning “a” for good performance.
Parameters:  num_parameters – number of “a” to learn. Default: 1
 init – the initial value of “a”. Default: 0.25
 Shape:
 Input: \((N, *)\) where * means any number of additional dimensions
 Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.PReLU()
>>> input = autograd.Variable(torch.randn(2))
>>> print(input)
>>> print(m(input))
LeakyReLU¶

class
torch.nn.
LeakyReLU
(negative_slope=0.01, inplace=False)[source]¶ Applies element-wise, \(f(x) = max(0, x) + {negative\_slope} * min(0, x)\)
Parameters:  negative_slope – Controls the angle of the negative slope. Default: 1e-2
 inplace – can optionally do the operation in-place. Default: False
 Shape:
 Input: \((N, *)\) where * means any number of additional dimensions
 Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.LeakyReLU(0.1)
>>> input = autograd.Variable(torch.randn(2))
>>> print(input)
>>> print(m(input))
Threshold¶

class
torch.nn.
Threshold
(threshold, value, inplace=False)[source]¶ Thresholds each element of the input Tensor
Threshold is defined as:
y = x, if x > threshold
y = value, if x <= threshold
Parameters:  threshold – The value to threshold at
 value – The value to replace with
 inplace – can optionally do the operation in-place. Default: False
 Shape:
 Input: \((N, *)\) where * means any number of additional dimensions
 Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.Threshold(0.1, 20)
>>> input = Variable(torch.randn(2))
>>> print(input)
>>> print(m(input))
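The threshold rule is simple to state in plain Python (a sketch; threshold here is a hypothetical helper, not the torch.nn module):

```python
def threshold(xs, thresh, value):
    # y = x if x > thresh, else `value`
    return [x if x > thresh else value for x in xs]

# With the same parameters as the example above: Threshold(0.1, 20)
print(threshold([-0.5, 0.05, 0.3], 0.1, 20))  # [20, 20, 0.3]
```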
Hardtanh¶

class
torch.nn.
Hardtanh
(min_val=-1, max_val=1, inplace=False, min_value=None, max_value=None)[source]¶ Applies the HardTanh function elementwise
HardTanh is defined as:
f(x) = +1, if x > 1
f(x) = -1, if x < -1
f(x) = x, otherwise
The range of the linear region \([-1, 1]\) can be adjusted
Parameters:  min_val – minimum value of the linear region range. Default: -1
 max_val – maximum value of the linear region range. Default: 1
 inplace – can optionally do the operation inplace. Default:
False
Keyword arguments min_value and max_value have been deprecated in favor of min_val and max_val.
 Shape:
 Input: \((N, *)\) where * means any number of additional dimensions
 Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.Hardtanh(-2, 2)
>>> input = autograd.Variable(torch.randn(2))
>>> print(input)
>>> print(m(input))
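HardTanh is just a clamp of the input into [min_val, max_val]; a scalar sketch of the formula:

```python
def hardtanh(x, min_val=-1.0, max_val=1.0):
    # Clamp x into the linear region [min_val, max_val].
    return min(max(x, min_val), max_val)

print(hardtanh(2.0), hardtanh(-3.0), hardtanh(0.5))
```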
Sigmoid¶

class
torch.nn.
Sigmoid
[source]¶ Applies the elementwise function \(f(x) = 1 / (1 + exp(-x))\)
 Shape:
 Input: \((N, *)\) where * means any number of additional dimensions
 Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.Sigmoid()
>>> input = autograd.Variable(torch.randn(2))
>>> print(input)
>>> print(m(input))
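The logistic function can be checked numerically in plain Python (math only; the module applies the same function elementwise over a tensor):

```python
import math

def sigmoid(x):
    # f(x) = 1 / (1 + exp(-x)); squashes any real input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0), sigmoid(10.0))
```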
Tanh¶

class
torch.nn.
Tanh
[source]¶ Applies elementwise, \(f(x) = (exp(x) - exp(-x)) / (exp(x) + exp(-x))\)
 Shape:
 Input: \((N, *)\) where * means any number of additional dimensions
 Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.Tanh()
>>> input = autograd.Variable(torch.randn(2))
>>> print(input)
>>> print(m(input))
LogSigmoid¶

class
torch.nn.
LogSigmoid
[source]¶ Applies elementwise \(LogSigmoid(x) = log(1 / (1 + exp(-x_i)))\)
 Shape:
 Input: \((N, *)\) where * means any number of additional dimensions
 Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.LogSigmoid()
>>> input = autograd.Variable(torch.randn(2))
>>> print(input)
>>> print(m(input))
Softplus¶

class
torch.nn.
Softplus
(beta=1, threshold=20)[source]¶ Applies elementwise \(f(x) = 1/beta * log(1 + exp(beta * x_i))\)
SoftPlus is a smooth approximation to the ReLU function and can be used to constrain the output of a machine to always be positive.
For numerical stability the implementation reverts to the linear function for inputs above a certain value.
Parameters:  beta – the beta value for the Softplus formulation. Default: 1
 threshold – values above this revert to a linear function. Default: 20
 Shape:
 Input: \((N, *)\) where * means any number of additional dimensions
 Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.Softplus()
>>> input = autograd.Variable(torch.randn(2))
>>> print(input)
>>> print(m(input))
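A scalar sketch of Softplus, including the linear fallback above the threshold that the docstring mentions for numerical stability:

```python
import math

def softplus(x, beta=1.0, threshold=20.0):
    # f(x) = (1/beta) * log(1 + exp(beta * x)); exp would overflow for large
    # inputs, so revert to the (numerically indistinguishable) linear function.
    if beta * x > threshold:
        return x
    return math.log1p(math.exp(beta * x)) / beta

print(softplus(0.0), softplus(100.0), softplus(-5.0))
```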
Softshrink¶

class
torch.nn.
Softshrink
(lambd=0.5)[source]¶ Applies the soft shrinkage function elementwise
SoftShrinkage operator is defined as:
f(x) = x - lambda, if x > lambda
f(x) = x + lambda, if x < -lambda
f(x) = 0, otherwise
Parameters: lambd – the lambda value for the Softshrink formulation. Default: 0.5
 Shape:
 Input: \((N, *)\) where * means any number of additional dimensions
 Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.Softshrink()
>>> input = autograd.Variable(torch.randn(2))
>>> print(input)
>>> print(m(input))
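The three-case shrinkage rule for a single scalar (illustrative only):

```python
def softshrink(x, lambd=0.5):
    # Shrink toward zero by lambd; zero out anything inside [-lambd, lambd].
    if x > lambd:
        return x - lambd
    if x < -lambd:
        return x + lambd
    return 0.0

print(softshrink(1.0), softshrink(-1.0), softshrink(0.3))
```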
Softsign¶

class
torch.nn.
Softsign
[source]¶ Applies elementwise, the function \(f(x) = x / (1 + |x|)\)
 Shape:
 Input: \((N, *)\) where * means any number of additional dimensions
 Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.Softsign()
>>> input = autograd.Variable(torch.randn(2))
>>> print(input)
>>> print(m(input))
Tanhshrink¶

class
torch.nn.
Tanhshrink
[source]¶ Applies elementwise, \(Tanhshrink(x) = x - Tanh(x)\)
 Shape:
 Input: \((N, *)\) where * means any number of additional dimensions
 Output: \((N, *)\), same shape as the input
Examples:
>>> m = nn.Tanhshrink()
>>> input = autograd.Variable(torch.randn(2))
>>> print(input)
>>> print(m(input))
Softmin¶

class
torch.nn.
Softmin
(dim=None)[source]¶ Applies the Softmin function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range (0, 1) and sum to 1
\(f(x) = \frac{\exp(-x_i)}{\sum_j \exp(-x_j)}\)
 Shape:
 Input: any shape
 Output: same as input
Parameters: dim (int) – A dimension along which Softmin will be computed (so every slice along dim will sum to 1). Returns: a Tensor of the same dimension and shape as the input, with values in the range [0, 1] Examples:
>>> m = nn.Softmin()
>>> input = autograd.Variable(torch.randn(2, 3))
>>> print(input)
>>> print(m(input))
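Softmin is a softmax over the negated inputs: smaller values get larger weight, and each slice sums to 1. A list-based sketch of the formula:

```python
import math

def softmin(xs):
    # f(x_i) = exp(-x_i) / sum_j exp(-x_j)
    exps = [math.exp(-x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

out = softmin([1.0, 2.0, 3.0])
print(out)  # weights sum to 1; the smallest input gets the largest weight
```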
Softmax¶

class
torch.nn.
Softmax
(dim=None)[source]¶ Applies the Softmax function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range (0, 1) and sum to 1
Softmax is defined as \(f_i(x) = \frac{\exp(x_i)}{\sum_j \exp(x_j)}\)
 Shape:
 Input: any shape
 Output: same as input
Returns: a Tensor of the same dimension and shape as the input with values in the range [0, 1] Parameters: dim (int) – A dimension along which Softmax will be computed (so every slice along dim will sum to 1). Note
This module doesn’t work directly with NLLLoss, which expects log-probabilities as its input. Use LogSoftmax instead (it’s faster and has better numerical properties).
Examples:
>>> m = nn.Softmax()
>>> input = autograd.Variable(torch.randn(2, 3))
>>> print(input)
>>> print(m(input))
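The Softmax formula in plain Python, using the standard max-subtraction trick for numerical stability (the trick is an implementation detail, not part of the formula; it leaves the result unchanged):

```python
import math

def softmax(xs):
    # f(x_i) = exp(x_i) / sum_j exp(x_j); subtracting max(xs) avoids
    # overflow and leaves the ratios unchanged.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

out = softmax([1.0, 2.0, 3.0])
print(out)
```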
Softmax2d¶

class
torch.nn.
Softmax2d
[source]¶ Applies SoftMax over features to each spatial location
When given an image of Channels x Height x Width, it will
apply Softmax to each location \((Channels, h_i, w_j)\)
 Shape:
 Input: \((N, C, H, W)\)
 Output: \((N, C, H, W)\) (same shape as input)
Returns: a Tensor of the same dimension and shape as the input with values in the range [0, 1] Examples:
>>> m = nn.Softmax2d()
>>> # you softmax over the 2nd dimension
>>> input = autograd.Variable(torch.randn(2, 3, 12, 13))
>>> print(input)
>>> print(m(input))
LogSoftmax¶

class
torch.nn.
LogSoftmax
(dim=None)[source]¶ Applies the Log(Softmax(x)) function to an n-dimensional input Tensor. The LogSoftmax formulation can be simplified as
\(f_i(x) = log(exp(x_i) / sum_j exp(x_j) )\)
 Shape:
 Input: any shape
 Output: same as input
Parameters: dim (int) – A dimension along which LogSoftmax will be computed. Returns: a Tensor of the same dimension and shape as the input with values in the range [-inf, 0) Examples:
>>> m = nn.LogSoftmax()
>>> input = autograd.Variable(torch.randn(2, 3))
>>> print(input)
>>> print(m(input))
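LogSoftmax can be computed as x_i minus the log-sum-exp of the slice, which is why it is more numerically stable than taking log(softmax(x)); a list-based sketch:

```python
import math

def log_softmax(xs):
    # log(exp(x_i) / sum_j exp(x_j)) = x_i - logsumexp(x)
    m = max(xs)
    lse = m + math.log(sum(math.exp(x - m) for x in xs))
    return [x - lse for x in xs]

out = log_softmax([1.0, 2.0, 3.0])
print(out)  # all values are negative; exponentiating them sums to 1
```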
Normalization layers¶
BatchNorm1d¶

class
torch.nn.
BatchNorm1d
(num_features, eps=1e05, momentum=0.1, affine=True)[source]¶ Applies Batch Normalization over a 2d or 3d input that is seen as a minibatch.
\[y = \frac{x - mean[x]}{\sqrt{Var[x] + \epsilon}} * gamma + beta\]The mean and standard-deviation are calculated per-dimension over the mini-batches and gamma and beta are learnable parameter vectors of size C (where C is the input size).
During training, this layer keeps a running estimate of its computed mean and variance. The running sum is kept with a default momentum of 0.1.
During evaluation, this running mean/variance is used for normalization.
Because the BatchNorm is done over the C dimension, computing statistics on (N, L) slices, it’s common terminology to call this Temporal BatchNorm
Parameters:  num_features – num_features from an expected input of size batch_size x num_features [x width]
 eps – a value added to the denominator for numerical stability. Default: 1e-5
 momentum – the value used for the running_mean and running_var computation. Default: 0.1
 affine – a boolean value that when set to
True
, gives the layer learnable affine parameters. Default:True
 Shape:
 Input: \((N, C)\) or \((N, C, L)\)
 Output: \((N, C)\) or \((N, C, L)\) (same shape as input)
Examples
>>> # With Learnable Parameters
>>> m = nn.BatchNorm1d(100)
>>> # Without Learnable Parameters
>>> m = nn.BatchNorm1d(100, affine=False)
>>> input = autograd.Variable(torch.randn(20, 100))
>>> output = m(input)
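The normalization formula for a single feature over one mini-batch, in plain Python (biased variance, as used for normalization; this illustrates the formula only and ignores the module's running statistics and train/eval distinction):

```python
import math

def batch_norm(xs, gamma=1.0, beta=0.0, eps=1e-5):
    # y = (x - mean[x]) / sqrt(Var[x] + eps) * gamma + beta
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return [(x - mean) / math.sqrt(var + eps) * gamma + beta for x in xs]

out = batch_norm([1.0, 2.0, 3.0, 4.0])
print(out)  # roughly zero mean and unit variance
```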
BatchNorm2d¶

class
torch.nn.
BatchNorm2d
(num_features, eps=1e05, momentum=0.1, affine=True)[source]¶ Applies Batch Normalization over a 4d input that is seen as a minibatch of 3d inputs
\[y = \frac{x - mean[x]}{\sqrt{Var[x] + \epsilon}} * gamma + beta\]The mean and standard-deviation are calculated per-dimension over the mini-batches and gamma and beta are learnable parameter vectors of size C (where C is the input size).
During training, this layer keeps a running estimate of its computed mean and variance. The running sum is kept with a default momentum of 0.1.
During evaluation, this running mean/variance is used for normalization.
Because the BatchNorm is done over the C dimension, computing statistics on (N, H, W) slices, it’s common terminology to call this Spatial BatchNorm
Parameters:  num_features – num_features from an expected input of size batch_size x num_features x height x width
 eps – a value added to the denominator for numerical stability. Default: 1e-5
 momentum – the value used for the running_mean and running_var computation. Default: 0.1
 affine – a boolean value that when set to
True
, gives the layer learnable affine parameters. Default:True
 Shape:
 Input: \((N, C, H, W)\)
 Output: \((N, C, H, W)\) (same shape as input)
Examples
>>> # With Learnable Parameters
>>> m = nn.BatchNorm2d(100)
>>> # Without Learnable Parameters
>>> m = nn.BatchNorm2d(100, affine=False)
>>> input = autograd.Variable(torch.randn(20, 100, 35, 45))
>>> output = m(input)
BatchNorm3d¶

class
torch.nn.
BatchNorm3d
(num_features, eps=1e05, momentum=0.1, affine=True)[source]¶ Applies Batch Normalization over a 5d input that is seen as a minibatch of 4d inputs
\[y = \frac{x - mean[x]}{\sqrt{Var[x] + \epsilon}} * gamma + beta\]The mean and standard-deviation are calculated per-dimension over the mini-batches and gamma and beta are learnable parameter vectors of size C (where C is the input size).
During training, this layer keeps a running estimate of its computed mean and variance. The running sum is kept with a default momentum of 0.1.
During evaluation, this running mean/variance is used for normalization.
Because the BatchNorm is done over the C dimension, computing statistics on (N, D, H, W) slices, it’s common terminology to call this Volumetric BatchNorm or Spatiotemporal BatchNorm
Parameters:  num_features – num_features from an expected input of size batch_size x num_features x depth x height x width
 eps – a value added to the denominator for numerical stability. Default: 1e-5
 momentum – the value used for the running_mean and running_var computation. Default: 0.1
 affine – a boolean value that when set to
True
, gives the layer learnable affine parameters. Default:True
 Shape:
 Input: \((N, C, D, H, W)\)
 Output: \((N, C, D, H, W)\) (same shape as input)
Examples
>>> # With Learnable Parameters
>>> m = nn.BatchNorm3d(100)
>>> # Without Learnable Parameters
>>> m = nn.BatchNorm3d(100, affine=False)
>>> input = autograd.Variable(torch.randn(20, 100, 35, 45, 10))
>>> output = m(input)
InstanceNorm1d¶

class
torch.nn.
InstanceNorm1d
(num_features, eps=1e05, momentum=0.1, affine=False)[source]¶ Applies Instance Normalization over a 3d input that is seen as a minibatch.
\[y = \frac{x - mean[x]}{\sqrt{Var[x]} + \epsilon} * gamma + beta\]The mean and standard-deviation are calculated per-dimension separately for each object in a mini-batch. Gamma and beta are learnable parameter vectors of size C (where C is the input size).
During training, this layer keeps a running estimate of its computed mean and variance. The running sum is kept with a default momentum of 0.1.
At evaluation time (.eval()), the default behaviour of the InstanceNorm module stays the same i.e. running mean/variance is NOT used for normalization. One can force using stored mean and variance with .use_running_stats(mode=True) method, and switch back to normal behavior with .use_running_stats(mode=False) method.
Parameters:  num_features – num_features from an expected input of size batch_size x num_features x width
 eps – a value added to the denominator for numerical stability. Default: 1e-5
 momentum – the value used for the running_mean and running_var computation. Default: 0.1
 affine – a boolean value that when set to
True
, gives the layer learnable affine parameters. Default:False
 Shape:
 Input: \((N, C, L)\)
 Output: \((N, C, L)\) (same shape as input)
Examples
>>> # Without Learnable Parameters
>>> m = nn.InstanceNorm1d(100)
>>> # With Learnable Parameters
>>> m = nn.InstanceNorm1d(100, affine=True)
>>> input = autograd.Variable(torch.randn(20, 100, 40))
>>> output = m(input)
InstanceNorm2d¶

class
torch.nn.
InstanceNorm2d
(num_features, eps=1e05, momentum=0.1, affine=False)[source]¶ Applies Instance Normalization over a 4d input that is seen as a minibatch of 3d inputs
\[y = \frac{x - mean[x]}{\sqrt{Var[x]} + \epsilon} * gamma + beta\]The mean and standard-deviation are calculated per-dimension separately for each object in a mini-batch. Gamma and beta are learnable parameter vectors of size C (where C is the input size).
During training, this layer keeps a running estimate of its computed mean and variance. The running sum is kept with a default momentum of 0.1.
At evaluation time (.eval()), the default behaviour of the InstanceNorm module stays the same i.e. running mean/variance is NOT used for normalization. One can force using stored mean and variance with .use_running_stats(mode=True) method, and switch back to normal behavior with .use_running_stats(mode=False) method.
Parameters:  num_features – num_features from an expected input of size batch_size x num_features x height x width
 eps – a value added to the denominator for numerical stability. Default: 1e-5
 momentum – the value used for the running_mean and running_var computation. Default: 0.1
 affine – a boolean value that when set to
True
, gives the layer learnable affine parameters. Default:False
 Shape:
 Input: \((N, C, H, W)\)
 Output: \((N, C, H, W)\) (same shape as input)
Examples
>>> # Without Learnable Parameters
>>> m = nn.InstanceNorm2d(100)
>>> # With Learnable Parameters
>>> m = nn.InstanceNorm2d(100, affine=True)
>>> input = autograd.Variable(torch.randn(20, 100, 35, 45))
>>> output = m(input)
InstanceNorm3d¶

class
torch.nn.
InstanceNorm3d
(num_features, eps=1e05, momentum=0.1, affine=False)[source]¶ Applies Instance Normalization over a 5d input that is seen as a minibatch of 4d inputs
\[y = \frac{x - mean[x]}{\sqrt{Var[x]} + \epsilon} * gamma + beta\]The mean and standard-deviation are calculated per-dimension separately for each object in a mini-batch. Gamma and beta are learnable parameter vectors of size C (where C is the input size).
During training, this layer keeps a running estimate of its computed mean and variance. The running sum is kept with a default momentum of 0.1.
At evaluation time (.eval()), the default behaviour of the InstanceNorm module stays the same i.e. running mean/variance is NOT used for normalization. One can force using stored mean and variance with .use_running_stats(mode=True) method, and switch back to normal behavior with .use_running_stats(mode=False) method.
Parameters:  num_features – num_features from an expected input of size batch_size x num_features x depth x height x width
 eps – a value added to the denominator for numerical stability. Default: 1e-5
 momentum – the value used for the running_mean and running_var computation. Default: 0.1
 affine – a boolean value that when set to
True
, gives the layer learnable affine parameters. Default:False
 Shape:
 Input: \((N, C, D, H, W)\)
 Output: \((N, C, D, H, W)\) (same shape as input)
Examples
>>> # Without Learnable Parameters
>>> m = nn.InstanceNorm3d(100)
>>> # With Learnable Parameters
>>> m = nn.InstanceNorm3d(100, affine=True)
>>> input = autograd.Variable(torch.randn(20, 100, 35, 45, 10))
>>> output = m(input)
Recurrent layers¶
RNN¶

class
torch.nn.
RNN
(*args, **kwargs)[source]¶ Applies a multilayer Elman RNN with tanh or ReLU nonlinearity to an input sequence.
For each element in the input sequence, each layer computes the following function:
\[h_t = \tanh(w_{ih} * x_t + b_{ih} + w_{hh} * h_{(t-1)} + b_{hh})\]where \(h_t\) is the hidden state at time t, and \(x_t\) is the hidden state of the previous layer at time t or \(input_t\) for the first layer. If nonlinearity=’relu’, then ReLU is used instead of tanh.
Parameters:  input_size – The number of expected features in the input x
 hidden_size – The number of features in the hidden state h
 num_layers – Number of recurrent layers.
 nonlinearity – The nonlinearity to use. Can be either ‘tanh’ or ‘relu’. Default: ‘tanh’
 bias – If
False
, then the layer does not use bias weights b_ih and b_hh. Default:True
 batch_first – If
True
, then the input and output tensors are provided as (batch, seq, feature)
 dropout – If non-zero, introduces a dropout layer on the outputs of each RNN layer except the last layer
 bidirectional – If
True
, becomes a bidirectional RNN. Default:False
 Inputs: input, h_0
 input (seq_len, batch, input_size): tensor containing the features
of the input sequence. The input can also be a packed variable length
sequence. See
torch.nn.utils.rnn.pack_padded_sequence()
for details.  h_0 (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided.
 Outputs: output, h_n
 output (seq_len, batch, hidden_size * num_directions): tensor
containing the output features (h_k) from the last layer of the RNN,
for each k. If a
torch.nn.utils.rnn.PackedSequence
has been given as the input, the output will also be a packed sequence.  h_n (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for k=seq_len.
Variables:  weight_ih_l[k] – the learnable input-hidden weights of the k-th layer, of shape (hidden_size x input_size) for k=0. Otherwise, the shape is (hidden_size x hidden_size)
 weight_hh_l[k] – the learnable hidden-hidden weights of the k-th layer, of shape (hidden_size x hidden_size)
 bias_ih_l[k] – the learnable input-hidden bias of the k-th layer, of shape (hidden_size)
 bias_hh_l[k] – the learnable hidden-hidden bias of the k-th layer, of shape (hidden_size)
Examples:
>>> rnn = nn.RNN(10, 20, 2)
>>> input = Variable(torch.randn(5, 3, 10))
>>> h0 = Variable(torch.randn(2, 3, 20))
>>> output, hn = rnn(input, h0)
LSTM¶

class
torch.nn.
LSTM
(*args, **kwargs)[source]¶ Applies a multilayer long shortterm memory (LSTM) RNN to an input sequence.
For each element in the input sequence, each layer computes the following function:
\[\begin{split}\begin{array}{ll} i_t = \mathrm{sigmoid}(W_{ii} x_t + b_{ii} + W_{hi} h_{(t-1)} + b_{hi}) \\ f_t = \mathrm{sigmoid}(W_{if} x_t + b_{if} + W_{hf} h_{(t-1)} + b_{hf}) \\ g_t = \tanh(W_{ig} x_t + b_{ig} + W_{hg} h_{(t-1)} + b_{hg}) \\ o_t = \mathrm{sigmoid}(W_{io} x_t + b_{io} + W_{ho} h_{(t-1)} + b_{ho}) \\ c_t = f_t * c_{(t-1)} + i_t * g_t \\ h_t = o_t * \tanh(c_t) \end{array}\end{split}\]where \(h_t\) is the hidden state at time t, \(c_t\) is the cell state at time t, \(x_t\) is the hidden state of the previous layer at time t or \(input_t\) for the first layer, and \(i_t\), \(f_t\), \(g_t\), \(o_t\) are the input, forget, cell, and output gates, respectively.
Parameters:  input_size – The number of expected features in the input x
 hidden_size – The number of features in the hidden state h
 num_layers – Number of recurrent layers.
 bias – If
False
, then the layer does not use bias weights b_ih and b_hh. Default:True
 batch_first – If
True
, then the input and output tensors are provided as (batch, seq, feature)
 dropout – If non-zero, introduces a dropout layer on the outputs of each RNN layer except the last layer
 bidirectional – If
True
, becomes a bidirectional RNN. Default:False
 Inputs: input, (h_0, c_0)
input (seq_len, batch, input_size): tensor containing the features of the input sequence. The input can also be a packed variable length sequence. See
torch.nn.utils.rnn.pack_padded_sequence()
for details.h_0 (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch.
c_0 (num_layers * num_directions, batch, hidden_size): tensor containing the initial cell state for each element in the batch.
If (h_0, c_0) is not provided, both h_0 and c_0 default to zero.
 Outputs: output, (h_n, c_n)
 output (seq_len, batch, hidden_size * num_directions): tensor
containing the output features (h_t) from the last layer of the RNN,
for each t. If a
torch.nn.utils.rnn.PackedSequence
has been given as the input, the output will also be a packed sequence.  h_n (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t=seq_len
 c_n (num_layers * num_directions, batch, hidden_size): tensor containing the cell state for t=seq_len
Variables:  weight_ih_l[k] – the learnable input-hidden weights of the k-th layer (W_ii|W_if|W_ig|W_io), of shape (4*hidden_size x input_size)
 weight_hh_l[k] – the learnable hidden-hidden weights of the k-th layer (W_hi|W_hf|W_hg|W_ho), of shape (4*hidden_size x hidden_size)
 bias_ih_l[k] – the learnable input-hidden bias of the k-th layer (b_ii|b_if|b_ig|b_io), of shape (4*hidden_size)
 bias_hh_l[k] – the learnable hidden-hidden bias of the k-th layer (b_hi|b_hf|b_hg|b_ho), of shape (4*hidden_size)
Examples:
>>> rnn = nn.LSTM(10, 20, 2)
>>> input = Variable(torch.randn(5, 3, 10))
>>> h0 = Variable(torch.randn(2, 3, 20))
>>> c0 = Variable(torch.randn(2, 3, 20))
>>> output, hn = rnn(input, (h0, c0))
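The six gate equations can be traced with scalar weights in plain Python (a toy, hypothetical parameterization in which each gate gets an (input weight, hidden weight, bias) triple; real layers use the weight matrices listed above):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell(x, h, c, W):
    # W maps gate name -> (w_input, w_hidden, bias); scalars for clarity.
    i = sigmoid(W['i'][0] * x + W['i'][1] * h + W['i'][2])    # input gate
    f = sigmoid(W['f'][0] * x + W['f'][1] * h + W['f'][2])    # forget gate
    g = math.tanh(W['g'][0] * x + W['g'][1] * h + W['g'][2])  # cell candidate
    o = sigmoid(W['o'][0] * x + W['o'][1] * h + W['o'][2])    # output gate
    c_new = f * c + i * g
    h_new = o * math.tanh(c_new)
    return h_new, c_new

# With all-zero weights every gate is sigmoid(0) = 0.5 and the candidate g
# is 0, so the cell state is simply halved each step.
W0 = {k: (0.0, 0.0, 0.0) for k in 'ifgo'}
h1, c1 = lstm_cell(1.0, 0.5, 2.0, W0)
print(h1, c1)
```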
GRU¶

class
torch.nn.
GRU
(*args, **kwargs)[source]¶ Applies a multilayer gated recurrent unit (GRU) RNN to an input sequence.
For each element in the input sequence, each layer computes the following function:
\[\begin{split}\begin{array}{ll} r_t = \mathrm{sigmoid}(W_{ir} x_t + b_{ir} + W_{hr} h_{(t-1)} + b_{hr}) \\ z_t = \mathrm{sigmoid}(W_{iz} x_t + b_{iz} + W_{hz} h_{(t-1)} + b_{hz}) \\ n_t = \tanh(W_{in} x_t + b_{in} + r_t * (W_{hn} h_{(t-1)} + b_{hn})) \\ h_t = (1 - z_t) * n_t + z_t * h_{(t-1)} \\ \end{array}\end{split}\]where \(h_t\) is the hidden state at time t, \(x_t\) is the hidden state of the previous layer at time t or \(input_t\) for the first layer, and \(r_t\), \(z_t\), \(n_t\) are the reset, update, and new gates, respectively.
Parameters:  input_size – The number of expected features in the input x
 hidden_size – The number of features in the hidden state h
 num_layers – Number of recurrent layers.
 bias – If
False
, then the layer does not use bias weights b_ih and b_hh. Default:True
 batch_first – If
True
, then the input and output tensors are provided as (batch, seq, feature)
 dropout – If non-zero, introduces a dropout layer on the outputs of each RNN layer except the last layer
 bidirectional – If
True
, becomes a bidirectional RNN. Default:False
 Inputs: input, h_0
 input (seq_len, batch, input_size): tensor containing the features
of the input sequence. The input can also be a packed variable length
sequence. See
torch.nn.utils.rnn.pack_padded_sequence()
for details.  h_0 (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided.
 Outputs: output, h_n
 output (seq_len, batch, hidden_size * num_directions): tensor
containing the output features h_t from the last layer of the RNN,
for each t. If a
torch.nn.utils.rnn.PackedSequence
has been given as the input, the output will also be a packed sequence.  h_n (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t=seq_len
Variables:  weight_ih_l[k] – the learnable input-hidden weights of the k-th layer (W_ir|W_iz|W_in), of shape (3*hidden_size x input_size)
 weight_hh_l[k] – the learnable hidden-hidden weights of the k-th layer (W_hr|W_hz|W_hn), of shape (3*hidden_size x hidden_size)
 bias_ih_l[k] – the learnable input-hidden bias of the k-th layer (b_ir|b_iz|b_in), of shape (3*hidden_size)
 bias_hh_l[k] – the learnable hidden-hidden bias of the k-th layer (b_hr|b_hz|b_hn), of shape (3*hidden_size)
Examples:
>>> rnn = nn.GRU(10, 20, 2)
>>> input = Variable(torch.randn(5, 3, 10))
>>> h0 = Variable(torch.randn(2, 3, 20))
>>> output, hn = rnn(input, h0)
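Likewise for the GRU update, with a toy scalar parameterization (each gate gets a hypothetical (w_input, w_hidden, b_input, b_hidden) tuple; real layers use the weight matrices listed above):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h, W):
    r = sigmoid(W['r'][0] * x + W['r'][2] + W['r'][1] * h + W['r'][3])  # reset
    z = sigmoid(W['z'][0] * x + W['z'][2] + W['z'][1] * h + W['z'][3])  # update
    # New gate: the reset gate scales the hidden-state contribution.
    n = math.tanh(W['n'][0] * x + W['n'][2] + r * (W['n'][1] * h + W['n'][3]))
    return (1.0 - z) * n + z * h  # interpolate between new and old state

# All-zero weights: r = z = 0.5 and n = 0, so the state is halved.
W0 = {k: (0.0, 0.0, 0.0, 0.0) for k in 'rzn'}
print(gru_cell(1.0, 2.0, W0))
```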
RNNCell¶

class
torch.nn.
RNNCell
(input_size, hidden_size, bias=True, nonlinearity='tanh')[source]¶ An Elman RNN cell with tanh or ReLU nonlinearity.
\[h' = \tanh(w_{ih} * x + b_{ih} + w_{hh} * h + b_{hh})\]If nonlinearity=’relu’, then ReLU is used in place of tanh.
Parameters:  input_size – The number of expected features in the input x
 hidden_size – The number of features in the hidden state h
 bias – If
False
, then the layer does not use bias weights b_ih and b_hh. Default:True
 nonlinearity – The nonlinearity to use. Can be either ‘tanh’ or ‘relu’. Default: ‘tanh’
 Inputs: input, hidden
 input (batch, input_size): tensor containing input features
 hidden (batch, hidden_size): tensor containing the initial hidden state for each element in the batch.
 Outputs: h’
 h’ (batch, hidden_size): tensor containing the next hidden state for each element in the batch
Variables:  weight_ih – the learnable input-hidden weights, of shape (input_size x hidden_size)
 weight_hh – the learnable hidden-hidden weights, of shape (hidden_size x hidden_size)
 bias_ih – the learnable input-hidden bias, of shape (hidden_size)
 bias_hh – the learnable hidden-hidden bias, of shape (hidden_size)
Examples:
>>> rnn = nn.RNNCell(10, 20)
>>> input = Variable(torch.randn(6, 3, 10))
>>> hx = Variable(torch.randn(3, 20))
>>> output = []
>>> for i in range(6):
...     hx = rnn(input[i], hx)
...     output.append(hx)
LSTMCell¶

class
torch.nn.
LSTMCell
(input_size, hidden_size, bias=True)[source]¶ A long shortterm memory (LSTM) cell.
\[\begin{split}\begin{array}{ll} i = \mathrm{sigmoid}(W_{ii} x + b_{ii} + W_{hi} h + b_{hi}) \\ f = \mathrm{sigmoid}(W_{if} x + b_{if} + W_{hf} h + b_{hf}) \\ g = \tanh(W_{ig} x + b_{ig} + W_{hg} h + b_{hg}) \\ o = \mathrm{sigmoid}(W_{io} x + b_{io} + W_{ho} h + b_{ho}) \\ c' = f * c + i * g \\ h' = o * \tanh(c') \\ \end{array}\end{split}\]Parameters:  input_size – The number of expected features in the input x
 hidden_size – The number of features in the hidden state h
 bias – If False, then the layer does not use bias weights b_ih and
b_hh. Default:
True
 Inputs: input, (h_0, c_0)
 input (batch, input_size): tensor containing input features
 h_0 (batch, hidden_size): tensor containing the initial hidden state for each element in the batch.
 c_0 (batch, hidden_size): tensor containing the initial cell state for each element in the batch.
 Outputs: h_1, c_1
 h_1 (batch, hidden_size): tensor containing the next hidden state for each element in the batch
 c_1 (batch, hidden_size): tensor containing the next cell state for each element in the batch
Variables:  weight_ih – the learnable input-hidden weights, of shape (4*hidden_size x input_size)
 weight_hh – the learnable hidden-hidden weights, of shape (4*hidden_size x hidden_size)
 bias_ih – the learnable input-hidden bias, of shape (4*hidden_size)
 bias_hh – the learnable hidden-hidden bias, of shape (4*hidden_size)
Examples:
>>> rnn = nn.LSTMCell(10, 20)
>>> input = Variable(torch.randn(6, 3, 10))
>>> hx = Variable(torch.randn(3, 20))
>>> cx = Variable(torch.randn(3, 20))
>>> output = []
>>> for i in range(6):
...     hx, cx = rnn(input[i], (hx, cx))
...     output.append(hx)
GRUCell¶

class
torch.nn.
GRUCell
(input_size, hidden_size, bias=True)[source]¶ A gated recurrent unit (GRU) cell
\[\begin{split}\begin{array}{ll} r = \mathrm{sigmoid}(W_{ir} x + b_{ir} + W_{hr} h + b_{hr}) \\ z = \mathrm{sigmoid}(W_{iz} x + b_{iz} + W_{hz} h + b_{hz}) \\ n = \tanh(W_{in} x + b_{in} + r * (W_{hn} h + b_{hn})) \\ h' = (1 - z) * n + z * h \end{array}\end{split}\]Parameters:  input_size – The number of expected features in the input x
 hidden_size – The number of features in the hidden state h
 bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True
 Inputs: input, hidden
 input (batch, input_size): tensor containing input features
 hidden (batch, hidden_size): tensor containing the initial hidden state for each element in the batch.
 Outputs: h’
 h’ (batch, hidden_size): tensor containing the next hidden state for each element in the batch
Variables:  weight_ih – the learnable input-hidden weights, of shape (3*hidden_size x input_size)
 weight_hh – the learnable hidden-hidden weights, of shape (3*hidden_size x hidden_size)
 bias_ih – the learnable input-hidden bias, of shape (3*hidden_size)
 bias_hh – the learnable hidden-hidden bias, of shape (3*hidden_size)
Examples:
>>> rnn = nn.GRUCell(10, 20)
>>> input = Variable(torch.randn(6, 3, 10))
>>> hx = Variable(torch.randn(3, 20))
>>> output = []
>>> for i in range(6):
...     hx = rnn(input[i], hx)
...     output.append(hx)
Linear layers¶
Linear¶

class
torch.nn.
Linear
(in_features, out_features, bias=True)[source]¶ Applies a linear transformation to the incoming data: \(y = Ax + b\)
Parameters:  in_features – size of each input sample
 out_features – size of each output sample
 bias – If set to False, the layer will not learn an additive bias.
Default:
True
 Shape:
 Input: \((N, *, in\_features)\) where * means any number of additional dimensions
 Output: \((N, *, out\_features)\) where all but the last dimension are the same shape as the input.
Variables:  weight – the learnable weights of the module of shape (out_features x in_features)
 bias – the learnable bias of the module of shape (out_features)
Examples:
>>> m = nn.Linear(20, 30)
>>> input = autograd.Variable(torch.randn(128, 20))
>>> output = m(input)
>>> print(output.size())
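The transformation \(y = Ax + b\) for one sample, written out with nested lists (an illustration of the math only; the module stores its weight as out_features x in_features):

```python
def linear(x, A, b):
    # y_i = sum_j A[i][j] * x[j] + b[i]
    return [sum(a * xj for a, xj in zip(row, x)) + bi
            for row, bi in zip(A, b)]

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # 3 output features from 2 inputs
b = [0.0, 0.0, 1.0]
print(linear([1.0, 2.0], A, b))
```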
Bilinear¶

class
torch.nn.
Bilinear
(in1_features, in2_features, out_features, bias=True)[source]¶ Applies a bilinear transformation to the incoming data: \(y = x_1 * A * x_2 + b\)
Parameters:  in1_features – size of each first input sample
 in2_features – size of each second input sample
 out_features – size of each output sample
 bias – If set to False, the layer will not learn an additive bias.
Default:
True
 Shape:
 Input: \((N, in1\_features)\), \((N, in2\_features)\)
 Output: \((N, out\_features)\)
Variables:  weight – the learnable weights of the module of shape (out_features x in1_features x in2_features)
 bias – the learnable bias of the module of shape (out_features)
Examples:
>>> m = nn.Bilinear(20, 30, 40)
>>> input1 = autograd.Variable(torch.randn(128, 20))
>>> input2 = autograd.Variable(torch.randn(128, 30))
>>> output = m(input1, input2)
>>> print(output.size())
Dropout layers¶
Dropout¶

class
torch.nn.
Dropout
(p=0.5, inplace=False)[source]¶ During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. The elements to zero are randomized on every forward call.
This has proven to be an effective technique for regularization and preventing the co-adaptation of neurons, as described in the paper Improving neural networks by preventing co-adaptation of feature detectors.
Furthermore, the outputs are scaled by a factor of 1/(1-p) during training. This means that during evaluation the module simply computes an identity function.
Parameters:  p – probability of an element to be zeroed. Default: 0.5
 inplace – If set to True, will do this operation in-place. Default: False
 Shape:
 Input: Any. Input can be of any shape
 Output: Same. Output is of the same shape as input
Examples:
>>> m = nn.Dropout(p=0.2)
>>> input = autograd.Variable(torch.randn(20, 16))
>>> output = m(input)
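The 1/(1-p) scaling keeps the expected value of each element unchanged during training; a stdlib-only sketch of the arithmetic (the values are hypothetical):

```python
p = 0.2   # drop probability, as in nn.Dropout(p=0.2)
x = 3.0   # one input element (hypothetical value)

# Each element survives with probability (1 - p) and is scaled by 1/(1 - p),
# so the expected output equals the input:
expected_output = (1 - p) * (x / (1 - p)) + p * 0.0
assert abs(expected_output - x) < 1e-12
```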
Dropout2d¶

class
torch.nn.
Dropout2d
(p=0.5, inplace=False)[source]¶ Randomly zeroes whole channels of the input tensor. The channels to zero out are randomized on every forward call.
Usually the input comes from Conv2d modules.
As described in the paper Efficient Object Localization Using Convolutional Networks, if adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then i.i.d. dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease.
In this case, nn.Dropout2d() will help promote independence between feature maps and should be used instead.
Parameters:  p (float, optional) – probability of an element to be zeroed. Default: 0.5
 inplace (bool, optional) – If set to True, will do this operation in-place. Default: False
 Shape:
 Input: \((N, C, H, W)\)
 Output: \((N, C, H, W)\) (same shape as input)
Examples:
>>> m = nn.Dropout2d(p=0.2)
>>> input = autograd.Variable(torch.randn(20, 16, 32, 32))
>>> output = m(input)
Dropout3d¶

class
torch.nn.
Dropout3d
(p=0.5, inplace=False)[source]¶ Randomly zeroes whole channels of the input tensor. The channels to zero are randomized on every forward call.
Usually the input comes from Conv3d modules.
As described in the paper Efficient Object Localization Using Convolutional Networks, if adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then i.i.d. dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease.
In this case, nn.Dropout3d() will help promote independence between feature maps and should be used instead.
Parameters:  p (float, optional) – probability of an element to be zeroed. Default: 0.5
 inplace (bool, optional) – If set to True, will do this operation in-place. Default: False
 Shape:
 Input: \((N, C, D, H, W)\)
 Output: \((N, C, D, H, W)\) (same shape as input)
Examples:
>>> m = nn.Dropout3d(p=0.2)
>>> input = autograd.Variable(torch.randn(20, 16, 4, 32, 32))
>>> output = m(input)
AlphaDropout¶

class
torch.nn.
AlphaDropout
(p=0.5)[source]¶ Applies Alpha Dropout over the input.
Alpha Dropout is a type of Dropout that maintains the self-normalizing property. For an input with zero mean and unit standard deviation, the output of Alpha Dropout maintains the original mean and standard deviation of the input. Alpha Dropout goes hand-in-hand with the SELU activation function, which ensures that the outputs have zero mean and unit standard deviation.
During training, it randomly masks some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. The elements to be masked are randomized on every forward call, and scaled and shifted to maintain zero mean and unit standard deviation.
During evaluation the module simply computes an identity function.
More details can be found in the paper Self-Normalizing Neural Networks.
Parameters: p (float) – probability of an element to be dropped. Default: 0.5
 Shape:
 Input: Any. Input can be of any shape
 Output: Same. Output is of the same shape as input
Examples:
>>> m = nn.AlphaDropout(p=0.2)
>>> input = autograd.Variable(torch.randn(20, 16))
>>> output = m(input)
Sparse layers¶
Embedding¶

class
torch.nn.
Embedding
(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2, scale_grad_by_freq=False, sparse=False)[source]¶ A simple lookup table that stores embeddings of a fixed dictionary and size.
This module is often used to store word embeddings and retrieve them using indices. The input to the module is a list of indices, and the output is the corresponding word embeddings.
Parameters:  num_embeddings (int) – size of the dictionary of embeddings
 embedding_dim (int) – the size of each embedding vector
 padding_idx (int, optional) – If given, pads the output with zeros whenever it encounters the index.
 max_norm (float, optional) – If given, will renormalize the embeddings to always have a norm lesser than this
 norm_type (float, optional) – The p of the pnorm to compute for the max_norm option
 scale_grad_by_freq (boolean, optional) – if given, this will scale gradients by the frequency of the words in the minibatch.
 sparse (boolean, optional) – if True, gradient w.r.t. weight matrix will be a sparse tensor. See Notes for more details regarding sparse gradients.
Variables: weight (Tensor) – the learnable weights of the module of shape (num_embeddings, embedding_dim)
 Shape:
 Input: LongTensor (N, W), N = minibatch, W = number of indices to extract per minibatch
 Output: (N, W, embedding_dim)
Notes
Keep in mind that only a limited number of optimizers support sparse gradients: currently it’s optim.SGD (cuda and cpu), optim.SparseAdam (cuda and cpu) and optim.Adagrad (cpu)
Examples:
>>> # an Embedding module containing 10 tensors of size 3
>>> embedding = nn.Embedding(10, 3)
>>> # a batch of 2 samples of 4 indices each
>>> input = Variable(torch.LongTensor([[1,2,4,5],[4,3,2,9]]))
>>> embedding(input)
Variable containing:
(0 ,.,.) =
 1.0822 1.2522 0.2434
 0.8393 0.6062 0.3348
 0.6597 0.0350 0.0837
 0.5521 0.9447 0.0498
(1 ,.,.) =
 0.6597 0.0350 0.0837
 0.1527 0.0877 0.4260
 0.8393 0.6062 0.3348
 0.8738 0.9054 0.4281
[torch.FloatTensor of size 2x4x3]
>>> # example with padding_idx
>>> embedding = nn.Embedding(10, 3, padding_idx=0)
>>> input = Variable(torch.LongTensor([[0,2,0,5]]))
>>> embedding(input)
Variable containing:
(0 ,.,.) =
 0.0000 0.0000 0.0000
 0.3452 0.4937 0.9361
 0.0000 0.0000 0.0000
 0.0706 2.1962 0.6276
[torch.FloatTensor of size 1x4x3]
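Underneath, the lookup is plain row indexing into the weight matrix; a stdlib-only sketch with a hypothetical 4-row table, embedding_dim=3, and padding_idx=0 pinning row 0 to zeros:

```python
table = [
    [0.0, 0.0, 0.0],  # row 0 plays the role of padding_idx=0
    [0.1, 0.2, 0.3],
    [0.4, 0.5, 0.6],
    [0.7, 0.8, 0.9],
]
batch = [[1, 3, 0, 2]]  # indices of shape (N=1, W=4)

# Output shape is (N, W, embedding_dim): each index is replaced by its row.
output = [[table[i] for i in row] for row in batch]
assert output[0][2] == [0.0, 0.0, 0.0]   # the padding index maps to zeros
assert output[0][1] == [0.7, 0.8, 0.9]
```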
EmbeddingBag¶

class
torch.nn.
EmbeddingBag
(num_embeddings, embedding_dim, max_norm=None, norm_type=2, scale_grad_by_freq=False, mode='mean')[source]¶ Computes sums or means of ‘bags’ of embeddings, without instantiating the intermediate embeddings.
 For bags of constant length:
 nn.EmbeddingBag with mode='sum' is equivalent to nn.Embedding followed by torch.sum(dim=1)
 nn.EmbeddingBag with mode='mean' is equivalent to nn.Embedding followed by torch.mean(dim=1)
However, nn.EmbeddingBag is much more time and memory efficient than using a chain of these operations.
Parameters:  num_embeddings (int) – size of the dictionary of embeddings
 embedding_dim (int) – the size of each embedding vector
 max_norm (float, optional) – If given, will renormalize the embeddings to always have a norm lesser than this
 norm_type (float, optional) – The p of the pnorm to compute for the max_norm option
 scale_grad_by_freq (boolean, optional) – if given, this will scale gradients by the frequency of the words in the dictionary.
 mode (string, optional) – ‘sum’ | ‘mean’. Specifies the way to reduce the bag. Default: ‘mean’
Variables: weight (Tensor) – the learnable weights of the module of shape (num_embeddings, embedding_dim)
 Inputs: input, offsets
 input (N or BxN): LongTensor containing the indices of the embeddings to extract. When input is a 1D Tensor of shape N, an offsets Tensor is given that contains the starting position of each new sequence in the mini-batch.
 offsets (B or None): LongTensor containing the starting positions of each sample in a mini-batch of variable-length sequences. If input is 2D (BxN), then offsets does not need to be given, as the input is treated as a mini-batch of fixed-length sequences of length N each.
 Shape:
 Input: LongTensor N, N = number of embeddings to extract, (or) LongTensor BxN, B = number of sequences in the mini-batch, N = number of embeddings per sequence
 Offsets: LongTensor B, B = number of bags. The values are the offsets in input for each bag, i.e. the cumsum of lengths. Offsets is not given if Input is a 2D BxN Tensor, in which case the input is considered to be fixed-length sequences.
 Output: (B, embedding_dim)
Examples:
>>> # an Embedding module containing 10 tensors of size 3
>>> embedding_sum = nn.EmbeddingBag(10, 3, mode='sum')
>>> # a batch of 2 samples of 4 indices each
>>> input = Variable(torch.LongTensor([1,2,4,5,4,3,2,9]))
>>> offsets = Variable(torch.LongTensor([0,4]))
>>> embedding_sum(input, offsets)
Variable containing:
 0.7296 4.6926 0.3295
 0.5186 0.5631 0.2792
[torch.FloatTensor of size 2x3]
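The offsets bookkeeping can be checked with a stdlib-only sketch (1-dimensional "embeddings" for brevity; the values are hypothetical):

```python
# Hypothetical scalar embedding per index, standing in for the weight rows.
table = {1: 1.0, 2: 2.0, 3: 3.0, 4: 4.0, 5: 5.0, 9: 9.0}
inp = [1, 2, 4, 5, 4, 3, 2, 9]   # flat 1D input, as in the example above
offsets = [0, 4]                  # bag boundaries: inp[0:4] and inp[4:8]

ends = offsets[1:] + [len(inp)]
bags = [sum(table[i] for i in inp[s:e]) for s, e in zip(offsets, ends)]
assert bags == [12.0, 18.0]   # mode='sum': 1+2+4+5 and 4+3+2+9
```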
Distance functions¶
CosineSimilarity¶

class
torch.nn.
CosineSimilarity
(dim=1, eps=1e-08)[source]¶ Returns cosine similarity between x1 and x2, computed along dim.
\[\text{similarity} = \dfrac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert _2 \cdot \Vert x_2 \Vert _2, \epsilon)}\]
Parameters:  dim (int, optional) – Dimension where cosine similarity is computed. Default: 1
 eps (float, optional) – Small value to avoid division by zero. Default: 1e-8
 Shape:
 Input1: \((\ast_1, D, \ast_2)\) where D is at position dim
 Input2: \((\ast_1, D, \ast_2)\), same shape as the Input1
 Output: \((\ast_1, \ast_2)\)
Examples:
>>> input1 = autograd.Variable(torch.randn(100, 128))
>>> input2 = autograd.Variable(torch.randn(100, 128))
>>> cos = nn.CosineSimilarity(dim=1, eps=1e-6)
>>> output = cos(input1, input2)
>>> print(output)
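The formula can be checked directly with a stdlib-only implementation:

```python
import math

def cosine_similarity(x1, x2, eps=1e-8):
    # similarity = (x1 . x2) / max(||x1||_2 * ||x2||_2, eps)
    dot = sum(a * b for a, b in zip(x1, x2))
    n1 = math.sqrt(sum(a * a for a in x1))
    n2 = math.sqrt(sum(b * b for b in x2))
    return dot / max(n1 * n2, eps)

assert cosine_similarity([1.0, 0.0], [0.0, 1.0]) == 0.0             # orthogonal
assert abs(cosine_similarity([1.0, 1.0], [2.0, 2.0]) - 1.0) < 1e-9  # parallel
```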
PairwiseDistance¶

class
torch.nn.
PairwiseDistance
(p=2, eps=1e-06)[source]¶ Computes the batchwise pairwise distance between vectors v1, v2 using the p-norm:
\[\Vert x \Vert _p := \left( \sum_{i=1}^n \vert x_i \vert ^ p \right) ^ {1/p}\]
Parameters:  p (real) – the norm degree. Default: 2
 eps (float, optional) – Small value to avoid division by zero. Default: 1e-6
 Shape:
 Input1: \((N, D)\) where D = vector dimension
 Input2: \((N, D)\), same shape as the Input1
 Output: \((N, 1)\)
Examples:
>>> pdist = nn.PairwiseDistance(p=2)
>>> input1 = autograd.Variable(torch.randn(100, 128))
>>> input2 = autograd.Variable(torch.randn(100, 128))
>>> output = pdist(input1, input2)
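The p-norm distance itself reduces to a one-liner; a stdlib-only check (eps omitted for clarity):

```python
def p_norm_distance(x1, x2, p=2):
    # ||x1 - x2||_p = (sum_i |x1_i - x2_i|^p)^(1/p)
    return sum(abs(a - b) ** p for a, b in zip(x1, x2)) ** (1.0 / p)

# A 3-4-5 right triangle gives an exact Euclidean distance:
assert p_norm_distance([3.0, 4.0], [0.0, 0.0]) == 5.0
assert p_norm_distance([3.0, 4.0], [0.0, 0.0], p=1) == 7.0  # Manhattan norm
```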
Loss functions¶
L1Loss¶

class
torch.nn.
L1Loss
(size_average=True, reduce=True)[source]¶ Creates a criterion that measures the mean absolute value of the elementwise difference between input x and target y:
The loss can be described as:
\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = \left| x_n - y_n \right|,\]where \(N\) is the batch size. If reduce is True, then:
\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if}\; \text{size_average} = \text{True},\\ \operatorname{sum}(L), & \text{if}\; \text{size_average} = \text{False}. \end{cases}\end{split}\]x and y can be of arbitrary shapes with a total of n elements each.
The sum operation still operates over all the elements, and divides by n.
The division by n can be avoided if one sets the constructor argument size_average=False.
Parameters:  size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
 reduce (bool, optional) – By default, the losses are averaged or summed for each minibatch. When reduce is False, the loss function returns a loss per batch element instead and ignores size_average. Default: True
 Shape:
 Input: \((N, *)\) where * means, any number of additional dimensions
 Target: \((N, *)\), same shape as the input
 Output: scalar. If reduce is False, then \((N, *)\), same shape as the input
Examples:
>>> loss = nn.L1Loss()
>>> input = autograd.Variable(torch.randn(3, 5), requires_grad=True)
>>> target = autograd.Variable(torch.randn(3, 5))
>>> output = loss(input, target)
>>> output.backward()
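Numerically, the criterion is just the mean (or sum) of element-wise absolute differences; a stdlib-only check with hypothetical values:

```python
x = [1.0, -2.0, 3.0]   # input (hypothetical)
y = [0.0, 0.0, 0.0]    # target

l = [abs(a - b) for a, b in zip(x, y)]   # per-element |x_n - y_n|
assert sum(l) / len(l) == 2.0            # size_average=True (default)
assert sum(l) == 6.0                     # size_average=False
```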
MSELoss¶

class
torch.nn.
MSELoss
(size_average=True, reduce=True)[source]¶ Creates a criterion that measures the mean squared error between n elements in the input x and target y.
The loss can be described as:
\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = \left( x_n - y_n \right)^2,\]where \(N\) is the batch size. If reduce is True, then:
\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if}\; \text{size_average} = \text{True},\\ \operatorname{sum}(L), & \text{if}\; \text{size_average} = \text{False}. \end{cases}\end{split}\]x and y can be of arbitrary shapes with a total of n elements each.
The sum operation still operates over all the elements, and divides by n.
The division by n can be avoided if one sets the internal variable size_average to False.
To get a batch of losses, a loss per batch element, set reduce to False. These losses are not averaged and are not affected by size_average.
Parameters:  size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Only applies when reduce is True. Default: True
 reduce (bool, optional) – By default, the losses are averaged over observations for each minibatch, or summed, depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
 Shape:
 Input: \((N, *)\) where * means, any number of additional dimensions
 Target: \((N, *)\), same shape as the input
Examples:
>>> loss = nn.MSELoss()
>>> input = autograd.Variable(torch.randn(3, 5), requires_grad=True)
>>> target = autograd.Variable(torch.randn(3, 5))
>>> output = loss(input, target)
>>> output.backward()
CrossEntropyLoss¶

class
torch.nn.
CrossEntropyLoss
(weight=None, size_average=True, ignore_index=-100, reduce=True)[source]¶ This criterion combines LogSoftMax and NLLLoss in one single class.
It is useful when training a classification problem with C classes. If provided, the optional argument weight should be a 1D Tensor assigning weight to each of the classes. This is particularly useful when you have an unbalanced training set.
The input is expected to contain scores for each class.
input has to be a 2D Tensor of size (minibatch, C).
This criterion expects a class index (0 to C-1) as the target for each value of a 1D tensor of size minibatch
The loss can be described as:
loss(x, class) = -log(exp(x[class]) / (\sum_j exp(x[j]))) = -x[class] + log(\sum_j exp(x[j]))
or in the case of the weight argument being specified:
loss(x, class) = weight[class] * (-x[class] + log(\sum_j exp(x[j])))
The losses are averaged across observations for each minibatch.
Parameters:  weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size “C”
 size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Ignored if reduce is False.
 ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets.
 reduce (bool, optional) – By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
 Shape:
 Input: \((N, C)\) where C = number of classes
 Target: \((N)\) where each value is 0 <= targets[i] <= C-1
 Output: scalar. If reduce is False, then \((N)\) instead.
Examples:
>>> loss = nn.CrossEntropyLoss()
>>> input = autograd.Variable(torch.randn(3, 5), requires_grad=True)
>>> target = autograd.Variable(torch.LongTensor(3).random_(5))
>>> output = loss(input, target)
>>> output.backward()
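The identity between the fused criterion and LogSoftmax followed by NLLLoss can be checked with a stdlib-only sketch (hypothetical scores):

```python
import math

def cross_entropy(x, cls):
    # loss(x, class) = -x[class] + log(sum_j exp(x[j]))
    return -x[cls] + math.log(sum(math.exp(v) for v in x))

x = [0.0, 0.0]   # two classes with equal scores (hypothetical)
assert abs(cross_entropy(x, 0) - math.log(2)) < 1e-12

# Same result via an explicit log-softmax then negative log likelihood:
log_z = math.log(sum(math.exp(v) for v in x))
log_probs = [v - log_z for v in x]
assert abs(-log_probs[0] - cross_entropy(x, 0)) < 1e-12
```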
NLLLoss¶

class
torch.nn.
NLLLoss
(weight=None, size_average=True, ignore_index=-100, reduce=True)[source]¶ The negative log likelihood loss. It is useful to train a classification problem with C classes.
If provided, the optional argument weight should be a 1D Tensor assigning weight to each of the classes. This is particularly useful when you have an unbalanced training set.
The input given through a forward call is expected to contain log-probabilities of each class: input has to be a 2D Tensor of size (minibatch, C)
Obtaining log-probabilities in a neural network is easily achieved by adding a LogSoftmax layer in the last layer of your network. You may use CrossEntropyLoss instead, if you prefer not to add an extra layer.
The target that this loss expects is a class index (0 to C1, where C = number of classes)
The loss can be described as:
\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - x_{n,y_n} \cdot \mathbb{1}\{y_n \not= \text{ignore_index}\},\]where \(N\) is the batch size. If reduce is True, then
\[\begin{split}\ell(x, y) = \begin{cases} \sum_{n=1}^N w_{y_n} l_n \Big/ \sum_{n=1}^N w_{y_n} \cdot \mathbb{1}\{y_n \not= \text{ignore_index}\}, & \text{if}\; \text{size_average} = \text{True},\\ \sum_{n=1}^N w_{y_n} l_n, & \text{if}\; \text{size_average} = \text{False}. \end{cases}\end{split}\]Parameters:  weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C
 size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
 ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets.
 reduce (bool, optional) – By default, the losses are averaged or summed for each minibatch. When reduce is False, the loss function returns a loss per batch element instead and ignores size_average. Default: True
 Shape:
 Input: \((N, C)\) where C = number of classes. In the case of K-dimensional loss with \(K \geq 2\), \((N, C, *)\) where * is K extra dimensions.
 Target: \((N)\) where each value is 0 <= targets[i] <= C-1. In the case of K-dimensional loss, \((N, *)\) where * is K extra dimensions.
 Output: scalar. If reduce is False, then \((N)\) instead. In the case of K-dimensional loss with reduce False, \((N, *)\), the same size as the target.
Examples:
>>> m = nn.LogSoftmax()
>>> loss = nn.NLLLoss()
>>> # input is of size N x C = 3 x 5
>>> input = autograd.Variable(torch.randn(3, 5), requires_grad=True)
>>> # each element in target has to have 0 <= value < C
>>> target = autograd.Variable(torch.LongTensor([1, 0, 4]))
>>> output = loss(m(input), target)
>>> output.backward()
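The interaction of ignore_index with size_average (averaging over non-ignored targets only) can be sketched with stdlib-only code and hypothetical log-probabilities:

```python
log_probs = [[-0.5, -1.0], [-2.0, -0.1], [-0.7, -0.7]]  # 3 samples, 2 classes
targets = [0, 1, -100]                                   # -100 is ignore_index

# l_n = -x_{n, y_n}, skipping ignored targets entirely:
losses = [-lp[t] for lp, t in zip(log_probs, targets) if t != -100]
loss = sum(losses) / len(losses)   # averaged over non-ignored targets only
assert abs(loss - 0.3) < 1e-12     # (0.5 + 0.1) / 2
```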
PoissonNLLLoss¶

class
torch.nn.
PoissonNLLLoss
(log_input=True, full=False, size_average=True, eps=1e-08, reduce=True)[source]¶ Negative log likelihood loss with Poisson distribution of target.
The loss can be described as:
target ~ Pois(input)
loss(input, target) = input - target * log(input) + log(target!)
The last term can be omitted or approximated with the Stirling formula. The approximation is used for target values greater than 1. For targets less than or equal to 1, zeros are added to the loss.
Parameters:  log_input (bool, optional) – if True the loss is computed as exp(input) - target * input, if False the loss is input - target * log(input+eps).
 full (bool, optional) – whether to compute the full loss, i.e. to add the Stirling approximation term target * log(target) - target + 0.5 * log(2 * pi * target).
 size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch.
 eps (float, optional) – Small value to avoid evaluation of log(0) when log_input == False. Default: 1e-8
 reduce (bool, optional) – By default, the losses are averaged over observations for each minibatch, or summed, depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
Examples:
>>> loss = nn.PoissonNLLLoss()
>>> log_input = autograd.Variable(torch.randn(5, 2), requires_grad=True)
>>> target = autograd.Variable(torch.randn(5, 2))
>>> output = loss(log_input, target)
>>> output.backward()
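For the default log_input=True case the per-element loss is exp(input) - target * input; a stdlib-only spot check with hypothetical values:

```python
import math

def poisson_nll(log_rate, target):
    # log_input=True: loss = exp(input) - target * input (Stirling term omitted)
    return math.exp(log_rate) - target * log_rate

assert poisson_nll(0.0, 2.0) == 1.0   # exp(0) - 2 * 0
assert abs(poisson_nll(1.0, 1.0) - (math.e - 1.0)) < 1e-12
```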
NLLLoss2d¶

class
torch.nn.
NLLLoss2d
(weight=None, size_average=True, ignore_index=-100, reduce=True)[source]¶ This is the negative log-likelihood loss, but for image inputs. It computes NLL loss per-pixel.
Parameters:  weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a 1D Tensor having as many elements, as there are classes.
 size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
 reduce (bool, optional) – By default, the losses are averaged or summed for each minibatch depending on size_average. When reduce is False, the loss function returns a loss per batch element instead and ignores size_average. Default: True
 Shape:
 Input: \((N, C, H, W)\) where C = number of classes
 Target: \((N, H, W)\) where each value is 0 <= targets[i] <= C-1
 Output: scalar. If reduce is False, then \((N, H, W)\) instead.
Examples:
>>> m = nn.Conv2d(16, 32, (3, 3)).float()
>>> loss = nn.NLLLoss2d()
>>> # input is of size N x C x height x width
>>> input = autograd.Variable(torch.randn(3, 16, 10, 10))
>>> # each element in target has to have 0 <= value < C
>>> target = autograd.Variable(torch.LongTensor(3, 8, 8).random_(0, 4))
>>> output = loss(m(input), target)
>>> output.backward()
KLDivLoss¶

class
torch.nn.
KLDivLoss
(size_average=True, reduce=True)[source]¶ The Kullback-Leibler divergence Loss
KL divergence is a useful distance measure for continuous distributions and is often useful when performing direct regression over the space of (discretely sampled) continuous output distributions.
As with NLLLoss, the input given is expected to contain logprobabilities, however unlike ClassNLLLoss, input is not restricted to a 2D Tensor, because the criterion is applied elementwise.
This criterion expects a target Tensor of the same size as the input Tensor.
The loss can be described as:
\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = y_n \odot \left( \log y_n - x_n \right),\]where \(N\) is the batch size. If reduce is True, then:
\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if}\; \text{size_average} = \text{True},\\ \operatorname{sum}(L), & \text{if}\; \text{size_average} = \text{False}. \end{cases}\end{split}\]By default, the losses are averaged for each minibatch over observations as well as over dimensions. However, if the field size_average is set to False, the losses are instead summed.
Parameters:  size_average (bool, optional) – By default, the losses are averaged for each minibatch over observations as well as over dimensions. However, if False the losses are instead summed.
 reduce (bool, optional) – By default, the losses are averaged over observations for each minibatch, or summed, depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
 Shape:
 input: \((N, *)\) where * means, any number of additional dimensions
 target: \((N, *)\), same shape as the input
 output: scalar. If reduce is False, then \((N, *)\), same shape as the input
BCELoss¶

class
torch.nn.
BCELoss
(weight=None, size_average=True)[source]¶ Creates a criterion that measures the Binary Cross Entropy between the target and the output:
The loss can be described as:
\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right],\]where \(N\) is the batch size. If reduce is True, then
\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if}\; \text{size_average} = \text{True},\\ \operatorname{sum}(L), & \text{if}\; \text{size_average} = \text{False}. \end{cases}\end{split}\]This is used for measuring the error of a reconstruction in, for example, an auto-encoder. Note that the targets y should be numbers between 0 and 1.
Parameters:  weight (Tensor, optional) – a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size “nbatch”.
 size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Default: True
 Shape:
 Input: \((N, *)\) where * means, any number of additional dimensions
 Target: \((N, *)\), same shape as the input
Examples:
>>> m = nn.Sigmoid()
>>> loss = nn.BCELoss()
>>> input = autograd.Variable(torch.randn(3), requires_grad=True)
>>> target = autograd.Variable(torch.FloatTensor(3).random_(2))
>>> output = loss(m(input), target)
>>> output.backward()
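The per-element formula can be verified with a stdlib-only sketch (taking w_n = 1):

```python
import math

def bce(x, y):
    # l_n = -[y_n * log(x_n) + (1 - y_n) * log(1 - x_n)]
    return -(y * math.log(x) + (1 - y) * math.log(1 - x))

# A maximally uncertain prediction of 0.5 costs log(2) nats for either target:
assert abs(bce(0.5, 1.0) - math.log(2)) < 1e-12
assert abs(bce(0.5, 0.0) - math.log(2)) < 1e-12
```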
BCEWithLogitsLoss¶

class
torch.nn.
BCEWithLogitsLoss
(weight=None, size_average=True)[source]¶ This loss combines a Sigmoid layer and the BCELoss in one single class. This version is more numerically stable than using a plain Sigmoid followed by a BCELoss as, by combining the operations into one layer, we take advantage of the log-sum-exp trick for numerical stability.
The loss can be described as:
\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ t_n \cdot \log \sigma(x_n) + (1 - t_n) \cdot \log (1 - \sigma(x_n)) \right],\]where \(N\) is the batch size. If reduce is True, then
\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if}\; \text{size_average} = \text{True},\\ \operatorname{sum}(L), & \text{if}\; \text{size_average} = \text{False}. \end{cases}\end{split}\]This is used for measuring the error of a reconstruction in, for example, an auto-encoder. Note that the targets t[i] should be numbers between 0 and 1.
Parameters:  weight (Tensor, optional) – a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size “nbatch”.
 size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Default: True
MarginRankingLoss¶

class
torch.nn.
MarginRankingLoss
(margin=0, size_average=True)[source]¶ Creates a criterion that measures the loss given inputs x1, x2, two 1D mini-batch Tensors, and a label 1D mini-batch tensor y with values (1 or -1).
If y == 1 then it assumed the first input should be ranked higher (have a larger value) than the second input, and vice-versa for y == -1.
The loss function for each sample in the minibatch is:
loss(x, y) = max(0, -y * (x1 - x2) + margin)
If the internal variable size_average = True, the loss function averages the loss over the batch samples; if size_average = False, then the loss function sums over the batch samples. By default, size_average equals True.
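A stdlib-only check of the per-sample formula, with hypothetical scores:

```python
def margin_ranking_loss(x1, x2, y, margin=0.0):
    # loss(x, y) = max(0, -y * (x1 - x2) + margin)
    return max(0.0, -y * (x1 - x2) + margin)

assert margin_ranking_loss(2.0, 1.0, 1) == 0.0   # x1 ranked higher: no loss
assert margin_ranking_loss(1.0, 2.0, 1) == 1.0   # wrong order: pay x2 - x1
assert margin_ranking_loss(1.0, 2.0, 1, margin=0.5) == 1.5
```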
HingeEmbeddingLoss¶

class
torch.nn.
HingeEmbeddingLoss
(margin=1.0, size_average=True)[source]¶ Measures the loss given an input tensor x and a labels tensor y containing values (1 or -1). This is usually used for measuring whether two inputs are similar or dissimilar, e.g. using the L1 pairwise distance as x, and is typically used for learning nonlinear embeddings or semi-supervised learning:
The loss function for \(n\)th sample in the minibatch is:
\[\begin{split}l_n = \begin{cases} x_n, & \text{if}\; y_n = 1,\\ \max \{0, \Delta - x_n\}, & \text{if}\; y_n = -1, \end{cases}\end{split}\]and the total loss function is
\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if}\; \text{size_average} = \text{True},\\ \operatorname{sum}(L), & \text{if}\; \text{size_average} = \text{False}. \end{cases}\end{split}\]where \(L = \{l_1,\dots,l_N\}^\top\).
x and y can be of arbitrary shapes with a total of n elements each. The sum operation operates over all the elements.
The division by n can be avoided if one sets the internal variable size_average=False.
The margin has a default value of 1, or can be set in the constructor.
MultiLabelMarginLoss¶

class
torch.nn.
MultiLabelMarginLoss
(size_average=True)[source]¶ Creates a criterion that optimizes a multi-class multi-classification hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor) and output y (which is a 2D Tensor of target class indices). For each sample in the mini-batch:
loss(x, y) = sum_ij(max(0, 1 - (x[y[j]] - x[i]))) / x.size(0)
where i == 0 to x.size(0), j == 0 to y.size(0), y[j] >= 0, and i != y[j] for all i and j.
y and x must have the same size.
The criterion only considers the first non-negative y[j] targets.
This allows for different samples to have variable amounts of target classes.
SmoothL1Loss¶

class
torch.nn.
SmoothL1Loss
(size_average=True, reduce=True)[source]¶ Creates a criterion that uses a squared term if the absolute elementwise error falls below 1 and an L1 term otherwise. It is less sensitive to outliers than the MSELoss and in some cases prevents exploding gradients (e.g. see “Fast RCNN” paper by Ross Girshick). Also known as the Huber loss:
                      { 0.5 * (x_i - y_i)^2, if |x_i - y_i| < 1
loss(x, y) = 1/n \sum {
                      { |x_i - y_i| - 0.5,   otherwise
x and y can be of arbitrary shapes with a total of n elements each; the sum operation still operates over all the elements, and divides by n.
The division by n can be avoided if one sets the internal variable size_average to False.
Parameters:  size_average (bool, optional) – By default, the losses are averaged over all elements. However, if the field size_average is set to False, the losses are instead summed. Ignored when reduce is False. Default: True
 reduce (bool, optional) – By default, the losses are averaged or summed over elements. When reduce is False, the loss function returns a loss per element instead and ignores size_average. Default: True
 Shape:
 Input: \((N, *)\) where * means, any number of additional dimensions
 Target: \((N, *)\), same shape as the input
 Output: scalar. If reduce is False, then \((N, *)\), same shape as the input
SoftMarginLoss¶

class
torch.nn.
SoftMarginLoss
(size_average=True)[source]¶ Creates a criterion that optimizes a two-class classification logistic loss between input x (a 2D mini-batch Tensor) and target y (which is a tensor containing either 1 or -1).
loss(x, y) = sum_i (log(1 + exp(-y[i]*x[i]))) / x.nelement()
The normalization by the number of elements in the input can be disabled by setting self.size_average to False.
MultiLabelSoftMarginLoss¶

class
torch.nn.
MultiLabelSoftMarginLoss
(weight=None, size_average=True)[source]¶ Creates a criterion that optimizes a multi-label one-versus-all loss based on max-entropy, between input x (a 2D mini-batch Tensor) and target y (a binary 2D Tensor). For each sample in the minibatch:
loss(x, y) = - sum_i ( y[i] * log( 1 / (1 + exp(-x[i])) ) + (1 - y[i]) * log( exp(-x[i]) / (1 + exp(-x[i])) ) )
where i == 0 to x.nElement()-1, y[i] in {0,1}. y and x must have the same size.
CosineEmbeddingLoss¶

class
torch.nn.
CosineEmbeddingLoss
(margin=0, size_average=True)[source]¶ Creates a criterion that measures the loss given input tensors x1, x2 and a Tensor label y with values 1 or -1. This is used for measuring whether two inputs are similar or dissimilar, using the cosine distance, and is typically used for learning nonlinear embeddings or semi-supervised learning.
margin should be a number from -1 to 1, 0 to 0.5 is suggested. If margin is missing, the default value is 0.
The loss function for each sample is:
             { 1 - cos(x1, x2),              if y == 1
loss(x, y) = {
             { max(0, cos(x1, x2) - margin), if y == -1
If the internal variable size_average is equal to True, the loss function averages the loss over the batch samples; if size_average is False, then the loss function sums over the batch samples. By default, size_average = True.
MultiMarginLoss¶

class
torch.nn.
MultiMarginLoss
(p=1, margin=1, weight=None, size_average=True)[source]¶ Creates a criterion that optimizes a multi-class classification hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor) and output y (which is a 1D tensor of target class indices, 0 <= y <= x.size(1)-1):
For each minibatch sample:
loss(x, y) = sum_i(max(0, (margin  x[y] + x[i]))^p) / x.size(0) where `i == 0` to `x.size(0)` and `i != y`.
Optionally, you can give nonequal weighting on the classes by passing a 1D weight tensor into the constructor.
The loss function then becomes:
loss(x, y) = sum_i(max(0, w[y] * (margin - x[y] + x[i]))^p) / x.size(0)
By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to
False
, the losses are instead summed.
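The per-sample hinge sum can be sketched in plain Python as follows (hypothetical helper, not the library implementation):

```python
def multi_margin_loss(x, y, p=1, margin=1.0, weight=None):
    # x: class scores for one sample, y: target class index.
    loss = 0.0
    for i, xi in enumerate(x):
        if i == y:
            continue                     # skip the target class itself
        w = weight[y] if weight is not None else 1.0
        loss += max(0.0, w * (margin - x[y] + xi)) ** p
    return loss / len(x)
```

When the target score exceeds every other score by at least the margin, the loss is zero.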
TripletMarginLoss¶

class
torch.nn.
TripletMarginLoss
(margin=1.0, p=2, eps=1e-06, swap=False)[source]¶ Creates a criterion that measures the triplet loss given input tensors x1, x2, x3 and a margin with a value greater than 0. This is used for measuring a relative similarity between samples. A triplet is composed of a, p and n: anchor, positive example and negative example, respectively. The shape of all input variables should be \((N, D)\).
The distance swap is described in detail in the paper Learning shallow convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al.
\[L(a, p, n) = \frac{1}{N} \left( \sum_{i=1}^N \max \{d(a_i, p_i) - d(a_i, n_i) + {\rm margin}, 0\} \right)\]where \(d(x_i, y_i) = \left\lVert {\bf x}_i - {\bf y}_i \right\rVert_p\).
Parameters:  anchor – anchor input tensor
 positive – positive input tensor
 negative – negative input tensor
 p – the norm degree. Default: 2
 Shape:
 Input: \((N, D)\) where D = vector dimension
 Output: \((N, 1)\)
>>> triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2)
>>> input1 = autograd.Variable(torch.randn(100, 128))
>>> input2 = autograd.Variable(torch.randn(100, 128))
>>> input3 = autograd.Variable(torch.randn(100, 128))
>>> output = triplet_loss(input1, input2, input3)
>>> output.backward()
Vision layers¶
PixelShuffle¶

class
torch.nn.
PixelShuffle
(upscale_factor)[source]¶ Rearranges elements in a Tensor of shape \((*, C * r^2, H, W)\) to a tensor of shape \((*, C, H * r, W * r)\).
This is useful for implementing efficient sub-pixel convolution with a stride of \(1/r\).
Look at the paper: Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network by Shi et al. (2016) for more details.
Parameters: upscale_factor (int) – factor to increase spatial resolution by  Shape:
 Input: \((N, C * {upscale\_factor}^2, H, W)\)
 Output: \((N, C, H * {upscale\_factor}, W * {upscale\_factor})\)
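The rearrangement can be sketched with NumPy reshapes and a transpose (an illustrative sketch, not the library implementation): each group of \(r^2\) channels becomes one \(r \times r\) spatial block.

```python
import numpy as np

def pixel_shuffle(x, r):
    # (N, C*r*r, H, W) -> (N, C, H*r, W*r)
    n, crr, h, w = x.shape
    c = crr // (r * r)
    x = x.reshape(n, c, r, r, h, w)
    x = x.transpose(0, 1, 4, 2, 5, 3)    # (N, C, H, r, W, r)
    return x.reshape(n, c, h * r, w * r)
```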
Examples:
>>> ps = nn.PixelShuffle(3)
>>> input = autograd.Variable(torch.Tensor(1, 9, 4, 4))
>>> output = ps(input)
>>> print(output.size())
torch.Size([1, 1, 12, 12])
Upsample¶

class
torch.nn.
Upsample
(size=None, scale_factor=None, mode='nearest')[source]¶ Upsamples a given multi-channel 1D (temporal), 2D (spatial) or 3D (volumetric) data.
The input data is assumed to be of the form minibatch x channels x [depth] x [height] x width. Hence, for spatial inputs, we expect a 4D Tensor and for volumetric inputs, we expect a 5D Tensor.
The algorithms available for upsampling are nearest neighbor and linear, bilinear and trilinear for 3D, 4D and 5D input Tensor, respectively.
One can either give a
scale_factor
or the target outputsize
to calculate the output size. (You cannot give both, as it is ambiguous)Parameters:  Shape:
 Input: \((N, C, W_{in})\), \((N, C, H_{in}, W_{in})\) or \((N, C, D_{in}, H_{in}, W_{in})\)
 Output: \((N, C, W_{out})\), \((N, C, H_{out}, W_{out})\) or \((N, C, D_{out}, H_{out}, W_{out})\) where \(D_{out} = floor(D_{in} * scale\_factor)\) or size[-3], \(H_{out} = floor(H_{in} * scale\_factor)\) or size[-2], \(W_{out} = floor(W_{in} * scale\_factor)\) or size[-1]
Examples:
>>> inp
Variable containing:
(0 ,0 ,.,.) =
  1  2
  3  4
[torch.FloatTensor of size 1x1x2x2]

>>> m = nn.Upsample(scale_factor=2, mode='bilinear')
>>> m(inp)
Variable containing:
(0 ,0 ,.,.) =
  1.0000  1.3333  1.6667  2.0000
  1.6667  2.0000  2.3333  2.6667
  2.3333  2.6667  3.0000  3.3333
  3.0000  3.3333  3.6667  4.0000
[torch.FloatTensor of size 1x1x4x4]

>>> m = nn.Upsample(scale_factor=2, mode='nearest')
>>> m(inp)
Variable containing:
(0 ,0 ,.,.) =
  1  1  2  2
  1  1  2  2
  3  3  4  4
  3  3  4  4
[torch.FloatTensor of size 1x1x4x4]
UpsamplingNearest2d¶

class
torch.nn.
UpsamplingNearest2d
(size=None, scale_factor=None)[source]¶ Applies a 2D nearest neighbor upsampling to an input signal composed of several input channels.
To specify the scale, it takes either the
size
or thescale_factor
as its constructor argument. When size is given, it is the output size of the image (h, w).
Parameters:  Shape:
 Input: \((N, C, H_{in}, W_{in})\)
 Output: \((N, C, H_{out}, W_{out})\) where \(H_{out} = floor(H_{in} * scale\_factor)\) \(W_{out} = floor(W_{in} * scale\_factor)\)
Examples:
>>> inp
Variable containing:
(0 ,0 ,.,.) =
  1  2
  3  4
[torch.FloatTensor of size 1x1x2x2]

>>> m = nn.UpsamplingNearest2d(scale_factor=2)
>>> m(inp)
Variable containing:
(0 ,0 ,.,.) =
  1  1  2  2
  1  1  2  2
  3  3  4  4
  3  3  4  4
[torch.FloatTensor of size 1x1x4x4]
UpsamplingBilinear2d¶

class
torch.nn.
UpsamplingBilinear2d
(size=None, scale_factor=None)[source]¶ Applies a 2D bilinear upsampling to an input signal composed of several input channels.
To specify the scale, it takes either the
size
or thescale_factor
as its constructor argument. When size is given, it is the output size of the image (h, w).
Parameters:  Shape:
 Input: \((N, C, H_{in}, W_{in})\)
 Output: \((N, C, H_{out}, W_{out})\) where \(H_{out} = floor(H_{in} * scale\_factor)\) \(W_{out} = floor(W_{in} * scale\_factor)\)
Examples:
>>> inp
Variable containing:
(0 ,0 ,.,.) =
  1  2
  3  4
[torch.FloatTensor of size 1x1x2x2]

>>> m = nn.UpsamplingBilinear2d(scale_factor=2)
>>> m(inp)
Variable containing:
(0 ,0 ,.,.) =
  1.0000  1.3333  1.6667  2.0000
  1.6667  2.0000  2.3333  2.6667
  2.3333  2.6667  3.0000  3.3333
  3.0000  3.3333  3.6667  4.0000
[torch.FloatTensor of size 1x1x4x4]
DataParallel layers (multiGPU, distributed)¶
DataParallel¶

class
torch.nn.
DataParallel
(module, device_ids=None, output_device=None, dim=0)[source]¶ Implements data parallelism at the module level.
This container parallelizes the application of the given module by splitting the input across the specified devices by chunking in the batch dimension. In the forward pass, the module is replicated on each device, and each replica handles a portion of the input. During the backwards pass, gradients from each replica are summed into the original module.
The batch size should be larger than the number of GPUs used. It should also be an integer multiple of the number of GPUs so that each chunk is the same size (so that each GPU processes the same number of samples).
See also: Use nn.DataParallel instead of multiprocessing
Arbitrary positional and keyword inputs are allowed to be passed into DataParallel EXCEPT Tensors. All variables will be scattered on dim specified (default 0). Primitive types will be broadcasted, but all other types will be a shallow copy and can be corrupted if written to in the model’s forward pass.
Warning
Forward and backward hooks defined on
module
and its submodules won’t be invoked anymore, unless the hooks are initialized in theforward()
method.Parameters:  module – module to be parallelized
 device_ids – CUDA devices (default: all devices)
 output_device – device location of output (default: device_ids[0])
Example:
>>> net = torch.nn.DataParallel(model, device_ids=[0, 1, 2])
>>> output = net(input_var)
DistributedDataParallel¶

class
torch.nn.parallel.
DistributedDataParallel
(module, device_ids=None, output_device=None, dim=0)[source]¶ Implements distributed data parallelism at the module level.
This container parallelizes the application of the given module by splitting the input across the specified devices by chunking in the batch dimension. The module is replicated on each machine and each device, and each such replica handles a portion of the input. During the backwards pass, gradients from each node are averaged.
The batch size should be larger than the number of GPUs used locally. It should also be an integer multiple of the number of GPUs so that each chunk is the same size (so that each GPU processes the same number of samples).
See also: Basics and Use nn.DataParallel instead of multiprocessing. The same constraints on input as in
torch.nn.DataParallel
apply.Creation of this class requires the distributed package to be already initialized in the process group mode (see
torch.distributed.init_process_group()
).Warning
This module works only with the
gloo
backend.Warning
Constructor, forward method, and differentiation of the output (or a function of the output of this module) is a distributed synchronization point. Take that into account in case different processes might be executing different code.
Warning
This module assumes all parameters are registered in the model by the time it is created. No parameters should be added nor removed later. Same applies to buffers.
Warning
This module assumes all buffers and gradients are dense.
Warning
This module doesn’t work with
torch.autograd.grad()
(i.e. it will only work if gradients are to be accumulated in.grad
attributes of parameters).Note
Parameters are never broadcast between processes. The module performs an all-reduce step on gradients and assumes that they will be modified by the optimizer in all processes in the same way. Buffers (e.g. BatchNorm stats) are broadcast from the module of process rank 0 to all other replicas in the system in every iteration.
Warning
Forward and backward hooks defined on
module
and its submodules won’t be invoked anymore, unless the hooks are initialized in theforward()
method.Parameters:  module – module to be parallelized
 device_ids – CUDA devices (default: all devices)
 output_device – device location of output (default: device_ids[0])
Example:
>>> torch.distributed.init_process_group(world_size=4, init_method='...')
>>> net = torch.nn.DistributedDataParallel(model)
Utilities¶
clip_grad_norm¶

torch.nn.utils.
clip_grad_norm
(parameters, max_norm, norm_type=2)[source]¶ Clips gradient norm of an iterable of parameters.
The norm is computed over all gradients together, as if they were concatenated into a single vector. Gradients are modified in-place.
Parameters: Returns: Total norm of the parameters (viewed as a single vector).
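The clipping logic can be sketched in plain Python over lists of gradient values (an illustrative sketch with an assumed small epsilon in the denominator, not the library implementation):

```python
def clip_grad_norm(grads, max_norm, norm_type=2):
    # grads: list of per-parameter gradient lists, modified in place.
    total = sum(abs(g) ** norm_type for grad in grads for g in grad) ** (1.0 / norm_type)
    clip_coef = max_norm / (total + 1e-6)  # small epsilon guards against /0
    if clip_coef < 1:
        for grad in grads:
            for i in range(len(grad)):
                grad[i] *= clip_coef
    return total
```

The return value is the norm before clipping, which is handy for logging gradient magnitudes during training.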
weight_norm¶

torch.nn.utils.
weight_norm
(module, name='weight', dim=0)[source]¶ Applies weight normalization to a parameter in the given module.
\[\mathbf{w} = g \dfrac{\mathbf{v}}{\lVert \mathbf{v} \rVert}\]Weight normalization is a reparameterization that decouples the magnitude of a weight tensor from its direction. This replaces the parameter specified by name (e.g. “weight”) with two parameters: one specifying the magnitude (e.g. “weight_g”) and one specifying the direction (e.g. “weight_v”). Weight normalization is implemented via a hook that recomputes the weight tensor from the magnitude and direction before every
forward()
call.By default, with dim=0, the norm is computed independently per output channel/plane. To compute a norm over the entire weight tensor, use dim=None.
See https://arxiv.org/abs/1602.07868
Parameters: Returns: The original module with the weight norm hook
Example:
>>> m = weight_norm(nn.Linear(20, 40), name='weight')
Linear (20 -> 40)
>>> m.weight_g.size()
torch.Size([40, 1])
>>> m.weight_v.size()
torch.Size([40, 20])
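The decomposition itself is easy to sketch with NumPy (hypothetical helper names; the per-row norm mirrors the default dim=0 behaviour):

```python
import numpy as np

def decompose(w):
    # Split w (out_features, in_features) into magnitude g and direction v,
    # taking the norm per output row.
    g = np.linalg.norm(w, axis=1, keepdims=True)
    return g, w.copy()

def recompose(g, v):
    # w = g * v / ||v||, recomputed from the two stored parameters.
    return g * v / np.linalg.norm(v, axis=1, keepdims=True)
```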
remove_weight_norm¶
PackedSequence¶

torch.nn.utils.rnn.
PackedSequence
(_cls, data, batch_sizes)[source]¶ Holds the data and list of batch_sizes of a packed sequence.
All RNN modules accept packed sequences as inputs.
Note
Instances of this class should never be created manually. They are meant to be instantiated by functions like
pack_padded_sequence()
.Variables:
pack_padded_sequence¶

torch.nn.utils.rnn.
pack_padded_sequence
(input, lengths, batch_first=False)[source]¶ Packs a Variable containing padded sequences of variable length.
Input can be of size
TxBx*
where T is the length of the longest sequence (equal tolengths[0]
), B is the batch size, and * is any number of dimensions (including 0). Ifbatch_first
is True, BxTx*
inputs are expected. The sequences should be sorted by length in decreasing order, i.e.
input[:,0]
should be the longest sequence, and input[:,B-1]
the shortest one.Note
This function accepts any input that has at least two dimensions. You can apply it to pack the labels, and use the output of the RNN with them to compute the loss directly. A Variable can be retrieved from a
PackedSequence
object by accessing its.data
attribute.Parameters: Returns: a
PackedSequence
object
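The packing scheme can be sketched on plain Python lists (hypothetical helper; the real function operates on Variables and returns a PackedSequence):

```python
def pack_padded(sequences):
    # sequences: plain lists sorted by length, longest first.
    # Returns time-major packed data plus the batch size at each time step.
    max_len = len(sequences[0])
    data, batch_sizes = [], []
    for t in range(max_len):
        active = [seq[t] for seq in sequences if len(seq) > t]
        data.extend(active)
        batch_sizes.append(len(active))
    return data, batch_sizes
```

At each time step only the sequences still "alive" contribute an element, which is exactly what lets the RNN skip padding.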
pad_packed_sequence¶

torch.nn.utils.rnn.
pad_packed_sequence
(sequence, batch_first=False, padding_value=0.0)[source]¶ Pads a packed batch of variable length sequences.
It is an inverse operation to
pack_padded_sequence()
.The returned Variable’s data will be of size TxBx*, where T is the length of the longest sequence and B is the batch size. If
batch_first
is True, the data will be transposed into BxTx* format.Batch elements will be ordered decreasingly by their length.
Parameters:  sequence (PackedSequence) – batch to pad
 batch_first (bool, optional) – if
True
, the output will be in BxTx* format.  padding_value (float, optional) – values for padded elements
Returns: Tuple of Variable containing the padded sequence, and a list of lengths of each sequence in the batch.
torch.nn.functional¶
Convolution functions¶
conv1d¶

torch.nn.functional.
conv1d
(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)[source]¶ Applies a 1D convolution over an input signal composed of several input planes.
See
Conv1d
for details and output shape.Parameters:  input – input tensor of shape (minibatch x in_channels x iW)
 weight – filters of shape (out_channels x in_channels x kW)
 bias – optional bias of shape (out_channels). Default: None
 stride – the stride of the convolving kernel. Can be a single number or a oneelement tuple (sW,). Default: 1
 padding – implicit zero paddings on both sides of the input. Can be a single number or a oneelement tuple (padW,). Default: 0
 dilation – the spacing between kernel elements. Can be a single number or a oneelement tuple (dW,). Default: 1
 groups – split input into groups, in_channels should be divisible by the number of groups. Default: 1
Examples:
>>> filters = autograd.Variable(torch.randn(33, 16, 3))
>>> inputs = autograd.Variable(torch.randn(20, 16, 50))
>>> F.conv1d(inputs, filters)
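For intuition, a single-channel sliding-window sketch in plain Python (the real operator additionally sums over input channels and filters; hypothetical helper, not the library implementation):

```python
def conv1d_naive(x, w, bias=0.0, stride=1):
    # Cross-correlation, as convolutions in the library: slide w over x.
    kW = len(w)
    out = []
    for start in range(0, len(x) - kW + 1, stride):
        out.append(sum(x[start + k] * w[k] for k in range(kW)) + bias)
    return out
```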
conv2d¶

torch.nn.functional.
conv2d
(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)[source]¶ Applies a 2D convolution over an input image composed of several input planes.
See
Conv2d
for details and output shape.Parameters:  input – input tensor (minibatch x in_channels x iH x iW)
 weight – filters tensor (out_channels x in_channels/groups x kH x kW)
 bias – optional bias tensor (out_channels). Default: None
 stride – the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1
 padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0
 dilation – the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1
 groups – split input into groups, in_channels should be divisible by the number of groups. Default: 1
Examples:
>>> # With square kernels and equal stride
>>> filters = autograd.Variable(torch.randn(8, 4, 3, 3))
>>> inputs = autograd.Variable(torch.randn(1, 4, 5, 5))
>>> F.conv2d(inputs, filters, padding=1)
conv3d¶

torch.nn.functional.
conv3d
(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)[source]¶ Applies a 3D convolution over an input image composed of several input planes.
See
Conv3d
for details and output shape.Parameters:  input – input tensor of shape (minibatch x in_channels x iT x iH x iW)
 weight – filters tensor of shape (out_channels x in_channels x kT x kH x kW)
 bias – optional bias tensor of shape (out_channels). Default: None
 stride – the stride of the convolving kernel. Can be a single number or a tuple (sT, sH, sW). Default: 1
 padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padT, padH, padW). Default: 0
 dilation – the spacing between kernel elements. Can be a single number or a tuple (dT, dH, dW). Default: 1
 groups – split input into groups, in_channels should be divisible by the number of groups. Default: 1
Examples:
>>> filters = autograd.Variable(torch.randn(33, 16, 3, 3, 3))
>>> inputs = autograd.Variable(torch.randn(20, 16, 50, 10, 20))
>>> F.conv3d(inputs, filters)
conv_transpose1d¶

torch.nn.functional.
conv_transpose1d
(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1)[source]¶ Applies a 1D transposed convolution operator over an input signal composed of several input planes, sometimes also called “deconvolution”.
See
ConvTranspose1d
for details and output shape.Parameters:  input – input tensor of shape (minibatch x in_channels x iW)
 weight – filters of shape (in_channels x out_channels x kW)
 bias – optional bias of shape (out_channels). Default: None
 stride – the stride of the convolving kernel. Can be a single number or a tuple (sW,). Default: 1
 padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padW,). Default: 0
 output_padding – implicit zeropaddings of 0 <= padding < stride on both sides of the output. Can be a single number or a tuple (out_padW,). Default: 0
 groups – split input into groups, in_channels should be divisible by the number of groups. Default: 1
 dilation – the spacing between kernel elements. Can be a single number or a tuple (dW,). Default: 1
conv_transpose2d¶

torch.nn.functional.
conv_transpose2d
(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1)[source]¶ Applies a 2D transposed convolution operator over an input image composed of several input planes, sometimes also called “deconvolution”.
See
ConvTranspose2d
for details and output shape.Parameters:  input – input tensor of shape (minibatch x in_channels x iH x iW)
 weight – filters of shape (in_channels x out_channels x kH x kW)
 bias – optional bias of shape (out_channels). Default: None
 stride – the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1
 padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0
 output_padding – implicit zeropaddings of 0 <= padding < stride on both sides of the output. Can be a single number or a tuple (out_padH, out_padW). Default: 0
 groups – split input into groups, in_channels should be divisible by the number of groups. Default: 1
 dilation – the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1
conv_transpose3d¶

torch.nn.functional.
conv_transpose3d
(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1)[source]¶ Applies a 3D transposed convolution operator over an input image composed of several input planes, sometimes also called “deconvolution”
See
ConvTranspose3d
for details and output shape.Parameters:  input – input tensor of shape (minibatch x in_channels x iT x iH x iW)
 weight – filters of shape (in_channels x out_channels x kT x kH x kW)
 bias – optional bias of shape (out_channels). Default: None
 stride – the stride of the convolving kernel. Can be a single number or a tuple (sT, sH, sW). Default: 1
 padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padT, padH, padW). Default: 0
 output_padding – implicit zeropaddings of 0 <= padding < stride on both sides of the output. Can be a single number or a tuple (out_padT, out_padH, out_padW). Default: 0
 groups – split input into groups, in_channels should be divisible by the number of groups. Default: 1
 dilation – the spacing between kernel elements. Can be a single number or a tuple (dT, dH, dW). Default: 1
Pooling functions¶
avg_pool1d¶

torch.nn.functional.
avg_pool1d
(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)[source]¶ Applies a 1D average pooling over an input signal composed of several input planes.
See
AvgPool1d
for details and output shape.Parameters:  input – input tensor (minibatch x in_channels x iW)
 kernel_size – the size of the window. Can be a single number or a tuple (kW,)
 stride – the stride of the window. Can be a single number or a tuple
(sW,). Default:
kernel_size
 padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padW,). Default: 0
 ceil_mode – when True, will use ceil instead of floor to compute the
output shape. Default:
False
 count_include_pad – when True, will include the zero-padding in the
averaging calculation. Default:
True
Example
>>> # pool of square window of size=3, stride=2
>>> input = Variable(torch.Tensor([[[1,2,3,4,5,6,7]]]))
>>> F.avg_pool1d(input, kernel_size=3, stride=2)
Variable containing:
(0 ,.,.) =
  2  4  6
[torch.FloatTensor of size 1x1x3]
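The same pooling can be sketched in plain Python for a single channel, reproducing the example above (hypothetical helper, not the library implementation):

```python
def avg_pool1d_naive(x, kernel_size, stride=None, padding=0):
    # Average each window of kernel_size elements, stepping by stride.
    stride = stride or kernel_size
    x = [0.0] * padding + list(x) + [0.0] * padding  # implicit zero padding
    out = []
    for start in range(0, len(x) - kernel_size + 1, stride):
        out.append(sum(x[start:start + kernel_size]) / kernel_size)
    return out
```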
avg_pool2d¶

torch.nn.functional.
avg_pool2d
(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=False) → Variable¶ Applies 2D averagepooling operation in kh x kw regions by step size dh x dw steps. The number of output features is equal to the number of input planes.
See
AvgPool2d
for details and output shape.Parameters:  input – input tensor (minibatch x in_channels x iH x iW)
 kernel_size – size of the pooling region. Can be a single number or a tuple (kH x kW)
 stride – stride of the pooling operation. Can be a single number or a tuple (sH, sW). Default is equal to kernel size
 padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0
 ceil_mode – when True, will use ceil instead of floor in the formula
to compute the output shape. Default:
False
 count_include_pad – when True, will include the zero-padding in the
averaging calculation. Default:
False
Warning
Default value for
count_include_pad
wasTrue
in versions before 0.3, and will be changed back toTrue
from 0.4.1 and forward.
avg_pool3d¶

torch.nn.functional.
avg_pool3d
(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=False) → Variable¶ Applies 3D averagepooling operation in kt x kh x kw regions by step size dt x dh x dw steps. The number of output features is equal to the number of input planes / dt.
See
AvgPool3d
for details and output shape.Parameters:  input – input tensor (minibatch x in_channels x iT x iH x iW)
 kernel_size – size of the pooling region. Can be a single number or a tuple (kT x kH x kW)
 stride – stride of the pooling operation. Can be a single number or a tuple (sT, sH, sW). Default is equal to kernel size
 padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padT, padH, padW), Default: 0
 ceil_mode – when True, will use ceil instead of floor in the formula to compute the output shape
 count_include_pad – when True, will include the zero-padding in the
averaging calculation. Default:
False
Warning
Default value for
count_include_pad
wasTrue
in versions before 0.3, and will be changed back toTrue
from 0.4.1 and forward.
max_pool1d¶
max_pool2d¶
max_pool3d¶
max_unpool1d¶

torch.nn.functional.
max_unpool1d
(input, indices, kernel_size, stride=None, padding=0, output_size=None)[source]¶ Computes a partial inverse of
MaxPool1d
.See
MaxUnpool1d
for details.
max_unpool2d¶

torch.nn.functional.
max_unpool2d
(input, indices, kernel_size, stride=None, padding=0, output_size=None)[source]¶ Computes a partial inverse of
MaxPool2d
.See
MaxUnpool2d
for details.
max_unpool3d¶

torch.nn.functional.
max_unpool3d
(input, indices, kernel_size, stride=None, padding=0, output_size=None)[source]¶ Computes a partial inverse of
MaxPool3d
.See
MaxUnpool3d
for details.
lp_pool2d¶
adaptive_max_pool1d¶

torch.nn.functional.
adaptive_max_pool1d
(input, output_size, return_indices=False)[source]¶ Applies a 1D adaptive max pooling over an input signal composed of several input planes.
See
AdaptiveMaxPool1d
for details and output shape.Parameters:  output_size – the target output size (single integer)
 return_indices – whether to return pooling indices. Default:
False
adaptive_max_pool2d¶

torch.nn.functional.
adaptive_max_pool2d
(input, output_size, return_indices=False)[source]¶ Applies a 2D adaptive max pooling over an input signal composed of several input planes.
See
AdaptiveMaxPool2d
for details and output shape.Parameters:  output_size – the target output size (single integer or doubleinteger tuple)
 return_indices – whether to return pooling indices. Default:
False
adaptive_max_pool3d¶

torch.nn.functional.
adaptive_max_pool3d
(input, output_size, return_indices=False)[source]¶ Applies a 3D adaptive max pooling over an input signal composed of several input planes.
See
AdaptiveMaxPool3d
for details and output shape.Parameters:  output_size – the target output size (single integer or tripleinteger tuple)
 return_indices – whether to return pooling indices. Default:
False
adaptive_avg_pool1d¶

torch.nn.functional.
adaptive_avg_pool1d
(input, output_size)[source]¶ Applies a 1D adaptive average pooling over an input signal composed of several input planes.
See
AdaptiveAvgPool1d
for details and output shape.Parameters: output_size – the target output size (single integer)
adaptive_avg_pool2d¶

torch.nn.functional.
adaptive_avg_pool2d
(input, output_size)[source]¶ Applies a 2D adaptive average pooling over an input signal composed of several input planes.
See
AdaptiveAvgPool2d
for details and output shape.Parameters: output_size – the target output size (single integer or doubleinteger tuple)
adaptive_avg_pool3d¶

torch.nn.functional.
adaptive_avg_pool3d
(input, output_size)[source]¶ Applies a 3D adaptive average pooling over an input signal composed of several input planes.
See
AdaptiveAvgPool3d
for details and output shape.Parameters: output_size – the target output size (single integer or tripleinteger tuple)
Nonlinear activation functions¶
threshold¶
relu¶
hardtanh¶
relu6¶
elu¶
selu¶
leaky_relu¶
prelu¶
rrelu¶

torch.nn.functional.
rrelu
(input, lower=1./8, upper=1./3, training=False, inplace=False) → Variable¶
glu¶

torch.nn.functional.
glu
(input, dim=1) → Variable¶ The gated linear unit. Computes:
\[H = A \times \sigma(B)\]where input is split in half along dim to form A and B.
See Language Modeling with Gated Convolutional Networks.
Parameters:
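A 1D sketch of the gated linear unit in plain Python (hypothetical helper; the real function splits along an arbitrary dim of a tensor):

```python
import math

def glu_1d(x):
    # Split x in half: first half A, second half B; return A * sigmoid(B).
    half = len(x) // 2
    a, b = x[:half], x[half:]
    return [ai / (1 + math.exp(-bi)) for ai, bi in zip(a, b)]
```

Note the output has half as many elements as the input along the split dimension.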
logsigmoid¶

torch.nn.functional.
logsigmoid
(input) → Variable¶ Applies element-wise \(LogSigmoid(x) = log( 1 / (1 + exp(-x_i)))\)
See
LogSigmoid
for more details.
hardshrink¶

torch.nn.functional.
hardshrink
(input, lambd=0.5) → Variable¶ Applies the hard shrinkage function elementwise
See
Hardshrink
for more details.
tanhshrink¶

torch.nn.functional.
tanhshrink
(input) → Variable[source]¶ Applies element-wise, \(Tanhshrink(x) = x - Tanh(x)\)
See
Tanhshrink
for more details.
softsign¶
softmin¶
softmax¶

torch.nn.functional.
softmax
(input, dim=None, _stacklevel=3)[source]¶ Applies a softmax function.
Softmax is defined as:
\(softmax(x)_i = \frac{exp(x_i)}{\sum_j exp(x_j)}\)
It is applied to all slices along dim, and will rescale them so that the elements lie in the range (0, 1) and sum to 1.
See
Softmax
for more details.Parameters: Note
This function doesn’t work directly with NLLLoss, which expects the Log to be computed between the Softmax and itself. Use log_softmax instead (it’s faster and has better numerical properties).
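Both functions can be sketched in plain Python with the usual max-subtraction trick for numerical stability (illustrative sketch, not the library implementation):

```python
import math

def softmax(x):
    m = max(x)                           # subtract max for stability
    exps = [math.exp(xi - m) for xi in x]
    s = sum(exps)
    return [e / s for e in exps]

def log_softmax(x):
    # log(softmax(x)) computed directly, avoiding exp/log round-trips.
    m = max(x)
    log_sum = math.log(sum(math.exp(xi - m) for xi in x))
    return [xi - m - log_sum for xi in x]
```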
softshrink¶

torch.nn.functional.
softshrink
(input, lambd=0.5) → Variable[source]¶ Applies the soft shrinkage function elementwise
See
Softshrink
for more details.
log_softmax¶

torch.nn.functional.
log_softmax
(input, dim=None, _stacklevel=3)[source]¶ Applies a softmax followed by a logarithm.
While mathematically equivalent to log(softmax(x)), doing these two operations separately is slower, and numerically unstable. This function uses an alternative formulation to compute the output and gradient correctly.
See
LogSoftmax
for more details.Parameters:
tanh¶
Normalization functions¶
batch_norm¶
normalize¶

torch.nn.functional.
normalize
(input, p=2, dim=1, eps=1e-12)[source]¶ Performs \(L_p\) normalization of inputs over specified dimension.
Does:
\[v = \frac{v}{\max(\lVert v \rVert_p, \epsilon)}\]for each subtensor v over dimension dim of input. Each subtensor is flattened into a vector, i.e. \(\lVert v \rVert_p\) is not a matrix norm.
With default arguments normalizes over the second dimension with Euclidean norm.
Parameters:
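For a single sub-tensor the operation reduces to the following plain-Python sketch (hypothetical helper, not the library implementation):

```python
def lp_normalize(v, p=2, eps=1e-12):
    # Divide by the p-norm, clamped below by eps to avoid division by zero.
    norm = sum(abs(x) ** p for x in v) ** (1.0 / p)
    return [x / max(norm, eps) for x in v]
```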
Linear functions¶
linear¶

torch.nn.functional.
linear
(input, weight, bias=None)[source]¶ Applies a linear transformation to the incoming data: \(y = xA^T + b\).
 Shape:
 Input: \((N, *, in\_features)\) where * means any number of additional dimensions
 Weight: \((out\_features, in\_features)\)
 Bias: \((out\_features)\)
 Output: \((N, *, out\_features)\)
Dropout functions¶
alpha_dropout¶

torch.nn.functional.
alpha_dropout
(input, p=0.5, training=False)[source]¶ Applies alpha dropout to the input.
See
AlphaDropout
for details.Parameters:
Distance functions¶
pairwise_distance¶

torch.nn.functional.
pairwise_distance
(x1, x2, p=2, eps=1e-06)[source]¶ Computes the batchwise pairwise distance between vectors x1, x2:
\[\Vert x \Vert _p := \left( \sum_{i=1}^n \vert x_i \vert ^ p \right) ^ {1/p}\]Parameters:  x1 – first input tensor
 x2 – second input tensor
 p – the norm degree. Default: 2
 eps (float, optional) – Small value to avoid division by zero. Default: 1e-6
 Shape:
 Input: \((N, D)\) where D = vector dimension
 Output: \((N, 1)\)
Example:
>>> input1 = autograd.Variable(torch.randn(100, 128))
>>> input2 = autograd.Variable(torch.randn(100, 128))
>>> output = F.pairwise_distance(input1, input2, p=2)
>>> output.backward()
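A row-wise sketch of the p-norm distance in plain Python (hypothetical helper; adding eps inside the difference is an assumption about how the stabilizer is applied):

```python
def pairwise_distance_naive(x1, x2, p=2, eps=1e-6):
    # Row-wise p-norm of the difference between paired vectors.
    out = []
    for a, b in zip(x1, x2):
        out.append(sum(abs(ai - bi + eps) ** p for ai, bi in zip(a, b)) ** (1.0 / p))
    return out
```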
cosine_similarity¶

torch.nn.functional.
cosine_similarity
(x1, x2, dim=1, eps=1e-08)[source]¶ Returns cosine similarity between x1 and x2, computed along dim.
\[\text{similarity} = \dfrac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert _2 \cdot \Vert x_2 \Vert _2, \epsilon)}\]Parameters:  Shape:
 Input: \((\ast_1, D, \ast_2)\) where D is at position dim.
 Output: \((\ast_1, \ast_2)\) where 1 is at position dim.
Example:
>>> input1 = autograd.Variable(torch.randn(100, 128))
>>> input2 = autograd.Variable(torch.randn(100, 128))
>>> output = F.cosine_similarity(input1, input2)
>>> print(output)
Loss functions¶
binary_cross_entropy¶

torch.nn.functional.
binary_cross_entropy
(input, target, weight=None, size_average=True)[source]¶ Function that measures the Binary Cross Entropy between the target and the output.
See
BCELoss
for details.Parameters:  input – Variable of arbitrary shape
 target – Variable of the same shape as input
 weight (Variable, optional) – a manual rescaling weight if provided it’s repeated to match input tensor shape
 size_average (bool, optional) – By default, the losses are averaged
over observations for each minibatch. However, if the field
sizeAverage is set to False, the losses are instead summed
for each minibatch. Default:
True
Examples:
>>> input = autograd.Variable(torch.randn(3), requires_grad=True)
>>> target = autograd.Variable(torch.LongTensor(3).random_(2))
>>> loss = F.binary_cross_entropy(F.sigmoid(input), target)
>>> loss.backward()
poisson_nll_loss¶

torch.nn.functional.
poisson_nll_loss
(input, target, log_input=True, full=False, size_average=True, eps=1e-08, reduce=True)[source]¶ Poisson negative log likelihood loss.
See
PoissonNLLLoss
for details.Parameters:  input – expectation of underlying Poisson distribution.
 target – random sample \(target \sim Pois(input)\).
 log_input – if
True
the loss is computed as exp(input) - target * input, ifFalse
then loss is input - target * log(input+eps). Default:True
 full – whether to compute full loss, i. e. to add the Stirling
approximation term. Default:
False
target * log(target) - target + 0.5 * log(2 * pi * target).  size_average – By default, the losses are averaged over observations for
each minibatch. However, if the field sizeAverage is set to False,
the losses are instead summed for each minibatch. Default:
True
 eps (float, optional) – Small value to avoid evaluation of log(0) when log_input=False. Default: 1e-8
 reduce (bool, optional) – By default, the losses are averaged
over observations for each minibatch, or summed, depending on
size_average. When reduce is
False
, returns a loss per batch element instead and ignores size_average. Default:True
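A minimal usage sketch (not part of the original docs; the count values below are illustrative). With log_input=True, the input is interpreted as the log of the Poisson rate:

```python
import torch
import torch.nn.functional as F
from torch import autograd

# log-rate predictions for 4 observations (input is interpreted as log(lambda))
log_rate = autograd.Variable(torch.randn(4), requires_grad=True)
# observed event counts
target = autograd.Variable(torch.Tensor([1.0, 0.0, 3.0, 2.0]))

loss = F.poisson_nll_loss(log_rate, target, log_input=True)
loss.backward()
```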
cosine_embedding_loss¶

torch.nn.functional.
cosine_embedding_loss
(input1, input2, target, margin=0, size_average=True) → Variable[source]¶ See
CosineEmbeddingLoss
for details.
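A short illustrative example (not from the original docs): target entries are +1 for pairs that should be similar and -1 for dissimilar pairs.

```python
import torch
import torch.nn.functional as F
from torch import autograd

input1 = autograd.Variable(torch.randn(3, 5))
input2 = autograd.Variable(torch.randn(3, 5))
# +1 marks a similar pair, -1 a dissimilar pair
target = autograd.Variable(torch.Tensor([1, -1, 1]))

loss = F.cosine_embedding_loss(input1, input2, target, margin=0.5)
```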
cross_entropy¶

torch.nn.functional.
cross_entropy
(input, target, weight=None, size_average=True, ignore_index=-100, reduce=True)[source]¶ This criterion combines log_softmax and nll_loss in a single function.
See
CrossEntropyLoss
for details.Parameters:  input – Variable \((N, C)\) where C = number of classes
 target – Variable \((N)\) where each value is 0 <= targets[i] <= C-1
 weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C
 size_average (bool, optional) – By default, the losses are averaged
over observations for each minibatch. However, if size_average is set to
False, the losses are instead summed for each minibatch. Ignored if
reduce is False. Default: True
 ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets. Default: -100
 reduce (bool, optional) – By default, the losses are averaged or summed over
observations for each minibatch depending on size_average. When reduce
is False, returns a loss per batch element instead and ignores
size_average. Default:
True
Examples:
>>> input = autograd.Variable(torch.randn(3, 5), requires_grad=True)
>>> target = autograd.Variable(torch.LongTensor(3).random_(5))
>>> loss = F.cross_entropy(input, target)
>>> loss.backward()
hinge_embedding_loss¶

torch.nn.functional.
hinge_embedding_loss
(input, target, margin=1.0, size_average=True) → Variable[source]¶ See
HingeEmbeddingLoss
for details.
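A short sketch (not from the original docs; values are illustrative). The input typically holds distances, and the target is +1 or -1 per element:

```python
import torch
import torch.nn.functional as F
from torch import autograd

# input holds non-negative distances; target is +1 or -1 per element
input = autograd.Variable(torch.randn(4).abs())
target = autograd.Variable(torch.Tensor([1, -1, 1, -1]))

loss = F.hinge_embedding_loss(input, target, margin=1.0)
```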
kl_div¶

torch.nn.functional.
kl_div
(input, target, size_average=True) → Variable¶ The Kullback-Leibler divergence loss.
See
KLDivLoss
for details.Parameters:  input – Variable of arbitrary shape
 target – Variable of the same shape as input
 size_average – if
True
the output is divided by the number of elements in input tensor. Default:True
 reduce (bool, optional) – By default, the losses are averaged
over observations for each minibatch, or summed, depending on
size_average. When reduce is False, returns a loss per batch
element instead and ignores size_average. Default:
True
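A minimal sketch (not from the original docs). As with KLDivLoss, the input is expected to contain log-probabilities while the target contains probabilities:

```python
import torch
import torch.nn.functional as F
from torch import autograd

# kl_div expects input as log-probabilities and target as probabilities
input = F.log_softmax(autograd.Variable(torch.randn(3, 5)), dim=1)
target = F.softmax(autograd.Variable(torch.randn(3, 5)), dim=1)

loss = F.kl_div(input, target)
```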
l1_loss¶
mse_loss¶
margin_ranking_loss¶

torch.nn.functional.
margin_ranking_loss
(input1, input2, target, margin=0, size_average=True) → Variable[source]¶ See
MarginRankingLoss
for details.
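A short sketch (not from the original docs; values are illustrative). A target of +1 means the first input should be ranked higher than the second, -1 the opposite:

```python
import torch
import torch.nn.functional as F
from torch import autograd

x1 = autograd.Variable(torch.randn(5))
x2 = autograd.Variable(torch.randn(5))
# +1: x1 should rank higher than x2; -1: the opposite
target = autograd.Variable(torch.Tensor([1, 1, -1, 1, -1]))

loss = F.margin_ranking_loss(x1, x2, target, margin=0.1)
```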
multilabel_margin_loss¶

torch.nn.functional.
multilabel_margin_loss
(input, target, size_average=True) → Variable¶ See
MultiLabelMarginLoss
for details.
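A short sketch (not from the original docs). Per sample, the target lists the indices of the positive classes, padded with -1 to the number of classes:

```python
import torch
import torch.nn.functional as F
from torch import autograd

input = autograd.Variable(torch.randn(2, 4))
# per sample: positive class indices, padded with -1
target = autograd.Variable(torch.LongTensor([[3, 0, -1, -1], [1, -1, -1, -1]]))

loss = F.multilabel_margin_loss(input, target)
```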
multilabel_soft_margin_loss¶

torch.nn.functional.
multilabel_soft_margin_loss
(input, target, weight=None, size_average=True) → Variable[source]¶ See
MultiLabelSoftMarginLoss
for details.
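A short sketch (not from the original docs). The target is a 0/1 matrix, since each sample may belong to several classes at once:

```python
import torch
import torch.nn.functional as F
from torch import autograd

input = autograd.Variable(torch.randn(2, 4))
# 0/1 matrix: each sample may belong to several classes at once
target = autograd.Variable(torch.Tensor([[1, 0, 1, 0], [0, 1, 0, 0]]))

loss = F.multilabel_soft_margin_loss(input, target)
```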
multi_margin_loss¶

torch.nn.functional.
multi_margin_loss
(input, target, p=1, margin=1, weight=None, size_average=True) → Variable[source]¶ See
MultiMarginLoss
for details.
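A short sketch (not from the original docs; values are illustrative). The input holds a score per class and the target holds the correct class index:

```python
import torch
import torch.nn.functional as F
from torch import autograd

# input holds a score per class; target holds the correct class index
input = autograd.Variable(torch.randn(3, 5))
target = autograd.Variable(torch.LongTensor([1, 0, 4]))

loss = F.multi_margin_loss(input, target, p=1, margin=1)
```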
nll_loss¶

torch.nn.functional.
nll_loss
(input, target, weight=None, size_average=True, ignore_index=-100, reduce=True)[source]¶ The negative log likelihood loss.
See
NLLLoss
for details.Parameters:  input – \((N, C)\) where C = number of classes, or \((N, C, H, W)\) in the case of 2D loss, or \((N, C, d_1, d_2, ..., d_K)\) where \(K \geq 1\) in the case of K-dimensional loss.
 target – \((N)\) where each value is 0 <= targets[i] <= C-1, or \((N, d_1, d_2, ..., d_K)\) where \(K \geq 1\) for K-dimensional loss.
 weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C
 size_average (bool, optional) – By default, the losses are averaged
over observations for each minibatch. If size_average
is False, the losses are summed for each minibatch. Default:
True
 ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets. Default: -100
Example:
>>> # input is of size N x C = 3 x 5
>>> input = autograd.Variable(torch.randn(3, 5))
>>> # each element in target has to have 0 <= value < C
>>> target = autograd.Variable(torch.LongTensor([1, 0, 4]))
>>> output = F.nll_loss(F.log_softmax(input), target)
>>> output.backward()
binary_cross_entropy_with_logits¶

torch.nn.functional.
binary_cross_entropy_with_logits
(input, target, weight=None, size_average=True)[source]¶ Function that measures Binary Cross Entropy between target and output logits.
See
BCEWithLogitsLoss
for details.Parameters:  input – Variable of arbitrary shape
 target – Variable of the same shape as input
 weight (Variable, optional) – a manual rescaling weight; if provided, it’s repeated to match the input tensor shape
 size_average (bool, optional) – By default, the losses are averaged
over observations for each minibatch. However, if size_average is set
to False, the losses are instead summed for each minibatch. Default:
True
Examples:
>>> input = autograd.Variable(torch.randn(3), requires_grad=True)
>>> target = autograd.Variable(torch.FloatTensor(3).random_(2))
>>> loss = F.binary_cross_entropy_with_logits(input, target)
>>> loss.backward()
smooth_l1_loss¶

torch.nn.functional.
smooth_l1_loss
(input, target, size_average=True) → Variable¶ Function that uses a squared term if the absolute element-wise error falls below 1 and an L1 term otherwise.
See
SmoothL1Loss
for details.
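A minimal sketch (not from the original docs): the squared region makes the loss less sensitive to outliers than plain MSE while staying differentiable at zero.

```python
import torch
import torch.nn.functional as F
from torch import autograd

input = autograd.Variable(torch.randn(3, 4), requires_grad=True)
target = autograd.Variable(torch.randn(3, 4))

# squared term for small errors, L1 term for large ones
loss = F.smooth_l1_loss(input, target)
loss.backward()
```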
soft_margin_loss¶

torch.nn.functional.
soft_margin_loss
(input, target, size_average=True) → Variable¶ See
SoftMarginLoss
for details.
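A short sketch (not from the original docs; values are illustrative). The target is +1 or -1 per element:

```python
import torch
import torch.nn.functional as F
from torch import autograd

input = autograd.Variable(torch.randn(4))
# target is +1 or -1 per element
target = autograd.Variable(torch.Tensor([1, -1, -1, 1]))

loss = F.soft_margin_loss(input, target)
```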
triplet_margin_loss¶

torch.nn.functional.
triplet_margin_loss
(anchor, positive, negative, margin=1.0, p=2, eps=1e-06, swap=False)[source]¶ Creates a criterion that measures the triplet loss given input tensors x1, x2, x3 and a margin with a value greater than 0. This is used for measuring a relative similarity between samples. A triplet is composed of a, p and n: anchor, positive example and negative example respectively. The shape of all input variables should be \((N, D)\).
The distance swap is described in detail in the paper Learning shallow convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al.
\[L(a, p, n) = \frac{1}{N} \left( \sum_{i=1}^N \max \{d(a_i, p_i) - d(a_i, n_i) + {\rm margin}, 0\} \right)\]where \(d(x_i, y_i) = \left\lVert {\bf x}_i - {\bf y}_i \right\rVert_p\).
Parameters:  anchor – anchor input tensor
 positive – positive input tensor
 negative – negative input tensor
 margin – the margin value. Default: 1
 p – the norm degree. Default: 2
 eps – small epsilon value to avoid numerical issues. Default: 1e-6
 swap – compute distance swap. Default:
False
 Shape:
 Input: \((N, D)\) where D = vector dimension
 Output: \((N, 1)\)
Example:
>>> input1 = autograd.Variable(torch.randn(100, 128))
>>> input2 = autograd.Variable(torch.randn(100, 128))
>>> input3 = autograd.Variable(torch.randn(100, 128))
>>> output = F.triplet_margin_loss(input1, input2, input3, p=2)
>>> output.backward()
Vision functions¶
pixel_shuffle¶

torch.nn.functional.
pixel_shuffle
(input, upscale_factor)[source]¶ Rearranges elements in a tensor of shape
[*, C*r^2, H, W]
to a tensor of shape [*, C, H*r, W*r]
.See
PixelShuffle
for details.Parameters: Examples:
>>> ps = nn.PixelShuffle(3)
>>> input = autograd.Variable(torch.Tensor(1, 9, 4, 4))
>>> output = ps(input)
>>> print(output.size())
torch.Size([1, 1, 12, 12])
pad¶

torch.nn.functional.
pad
(input, pad, mode='constant', value=0)[source]¶ Pads tensor.
 N-d constant padding: The number of dimensions to pad is
len(padding) // 2, and the dimensions that get padded begin with the last dimension and move forward. See below for examples.
 1D, 2D and 3D “reflect”/”replicate” padding:
 1D: 3D input with padding in form (pad_l, pad_r). 2D: 4D input tensor; pad should be in form (pad_l, pad_r, pad_t, pad_b). 3D: 5D input; pad in form (pleft, pright, ptop, pbottom, pfront, pback); no “reflect” implementation for 3D.
Parameters: Examples:
>>> t4d = torch.Tensor(3, 3, 4, 2)
>>> p1d = (1, 1) # pad last dim by 1 on each side
>>> out = F.pad(t4d, p1d, "constant", 0)
>>> print(out.data.size())
torch.Size([3, 3, 4, 4])
>>> p2d = (1, 1, 2, 2) # pad last dim by (1, 1) and 2nd to last by (2, 2)
>>> out = F.pad(t4d, p2d, "constant", 0)
>>> print(out.data.size())
torch.Size([3, 3, 8, 4])
>>> t4d = torch.Tensor(3, 3, 4, 2)
>>> p3d = (0, 1, 2, 1, 3, 3) # pad by (0, 1), (2, 1), and (3, 3)
>>> out = F.pad(t4d, p3d, "constant", 0)
>>> print(out.data.size())
torch.Size([3, 9, 7, 3])
upsample¶

torch.nn.functional.
upsample
(input, size=None, scale_factor=None, mode='nearest')[source]¶ Upsamples the input to either the given
size
or the given scale_factor.
The algorithm used for upsampling is determined by
mode
.Currently temporal, spatial and volumetric upsampling are supported, i.e. expected inputs are 3D, 4D or 5D in shape.
The input dimensions are interpreted in the form: minibatch x channels x [depth] x [height] x width
The modes available for upsampling are: nearest, linear (3D-only), bilinear (4D-only), trilinear (5D-only)
Parameters:  input (Variable) – input
 size (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int]) – output spatial size.
 scale_factor (int) – multiplier for spatial size. Has to be an integer.
 mode (string) – algorithm used for upsampling: ‘nearest’ | ‘linear’ | ‘bilinear’ | ‘trilinear’. Default: ‘nearest’
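A short sketch (not from the original docs) upsampling a 4D input both by scale_factor and by an explicit size:

```python
import torch
import torch.nn.functional as F
from torch import autograd

# 4D input: minibatch x channels x height x width
input = autograd.Variable(torch.randn(1, 3, 8, 8))

out_nearest = F.upsample(input, scale_factor=2, mode='nearest')
out_bilinear = F.upsample(input, size=(16, 16), mode='bilinear')
```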
upsample_nearest¶

torch.nn.functional.
upsample_nearest
(input, size=None, scale_factor=None)[source]¶ Upsamples the input, using nearest neighbours’ pixel values.
Note: This function is deprecated. Use nn.functional.upsample instead.
Currently spatial and volumetric upsampling are supported (i.e. expected inputs are 4 or 5 dimensional).
Parameters:
upsample_bilinear¶

torch.nn.functional.
upsample_bilinear
(input, size=None, scale_factor=None)[source]¶ Upscales the input, using bilinear upsampling.
Note: This function is deprecated. Use nn.functional.upsample instead.
Expected inputs are spatial (4-dimensional). Use upsample_trilinear for volumetric (5-dimensional) inputs.
Parameters:
grid_sample¶

torch.nn.functional.
grid_sample
(input, grid, mode='bilinear', padding_mode='zeros')[source]¶ Given an
input
and a flowfieldgrid
, computes the output using input pixel locations from the grid.Uses bilinear interpolation to sample the input pixels. Currently, only spatial (4 dimensional) inputs are supported.
For each output location,
grid
has x and y input pixel locations which are used to compute output.grid
has values in the range of [-1, 1]. This is because the pixel locations are normalized by the input height and width. For example:
 values x: -1, y: -1 is the left-top pixel of the input
 values x: 1, y: 1 is the right-bottom pixel of the input
If
grid
has values outside the range of [-1, 1], those locations are handled as defined by padding_mode. Options are zeros or border, defining those locations to use 0 or image border values as contribution to the bilinear interpolation.
Note
This function is used in building Spatial Transformer Networks
Parameters: Returns: output Tensor
Return type: output (Variable)
affine_grid¶

torch.nn.functional.
affine_grid
(theta, size)[source]¶ Generates a 2D flow field, given a batch of affine matrices
theta
Generally used in conjunction withgrid_sample()
to implement Spatial Transformer Networks.Parameters:  theta (Variable) – input batch of affine matrices (N x 2 x 3)
 size (torch.Size) – the target output image size (N x C x H x W) Example: torch.Size((32, 3, 24, 24))
Returns: output Tensor of size (N x H x W x 2)
Return type: output (Variable)
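A sketch (not from the original docs) combining affine_grid with grid_sample, the pairing described above for Spatial Transformer Networks; the identity matrix theta leaves the image unchanged up to interpolation:

```python
import torch
import torch.nn.functional as F
from torch import autograd

# identity affine transform for a batch of one image
theta = autograd.Variable(torch.Tensor([[[1, 0, 0], [0, 1, 0]]]))  # N x 2 x 3
input = autograd.Variable(torch.randn(1, 3, 10, 10))

grid = F.affine_grid(theta, torch.Size((1, 3, 10, 10)))  # N x H x W x 2
output = F.grid_sample(input, grid)
```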
torch.nn.init¶

torch.nn.init.
calculate_gain
(nonlinearity, param=None)[source]¶ Return the recommended gain value for the given nonlinearity function. The values are as follows:
nonlinearity      gain
linear            \(1\)
conv{1,2,3}d      \(1\)
sigmoid           \(1\)
tanh              \(5 / 3\)
relu              \(\sqrt{2}\)
leaky_relu        \(\sqrt{2 / (1 + negative\_slope^2)}\)
Parameters:  nonlinearity – the non-linear function (nn.functional name)
 param – optional parameter for the nonlinear function
Examples
>>> gain = nn.init.calculate_gain('leaky_relu')

torch.nn.init.
uniform
(tensor, a=0, b=1)[source]¶ Fills the input Tensor or Variable with values drawn from the uniform distribution \(U(a, b)\).
Parameters:  tensor – an n-dimensional torch.Tensor or autograd.Variable
 a – the lower bound of the uniform distribution
 b – the upper bound of the uniform distribution
Examples
>>> w = torch.Tensor(3, 5)
>>> nn.init.uniform(w)

torch.nn.init.
normal
(tensor, mean=0, std=1)[source]¶ Fills the input Tensor or Variable with values drawn from the normal distribution \(N(mean, std)\).
Parameters:  tensor – an n-dimensional torch.Tensor or autograd.Variable
 mean – the mean of the normal distribution
 std – the standard deviation of the normal distribution
Examples
>>> w = torch.Tensor(3, 5)
>>> nn.init.normal(w)

torch.nn.init.
constant
(tensor, val)[source]¶ Fills the input Tensor or Variable with the value val.
Parameters:  tensor – an n-dimensional torch.Tensor or autograd.Variable
 val – the value to fill the tensor with
Examples
>>> w = torch.Tensor(3, 5)
>>> nn.init.constant(w, 0.3)

torch.nn.init.
eye
(tensor)[source]¶ Fills the 2-dimensional input Tensor or Variable with the identity matrix. Preserves the identity of the inputs in Linear layers, where as many inputs are preserved as possible.
Parameters: tensor – a 2-dimensional torch.Tensor or autograd.Variable
Examples
>>> w = torch.Tensor(3, 5)
>>> nn.init.eye(w)

torch.nn.init.
dirac
(tensor)[source]¶ Fills the {3, 4, 5}-dimensional input Tensor or Variable with the Dirac delta function. Preserves the identity of the inputs in Convolutional layers, where as many input channels are preserved as possible.
Parameters: tensor – a {3, 4, 5}-dimensional torch.Tensor or autograd.Variable
Examples
>>> w = torch.Tensor(3, 16, 5, 5)
>>> nn.init.dirac(w)

torch.nn.init.
xavier_uniform
(tensor, gain=1)[source]¶ Fills the input Tensor or Variable with values according to the method described in “Understanding the difficulty of training deep feedforward neural networks” – Glorot, X. & Bengio, Y. (2010), using a uniform distribution. The resulting tensor will have values sampled from \(U(-a, a)\) where \(a = gain \times \sqrt{2 / (fan\_in + fan\_out)} \times \sqrt{3}\). Also known as Glorot initialisation.
Parameters:  tensor – an n-dimensional torch.Tensor or autograd.Variable
 gain – an optional scaling factor
Examples
>>> w = torch.Tensor(3, 5)
>>> nn.init.xavier_uniform(w, gain=nn.init.calculate_gain('relu'))

torch.nn.init.
xavier_normal
(tensor, gain=1)[source]¶ Fills the input Tensor or Variable with values according to the method described in “Understanding the difficulty of training deep feedforward neural networks” – Glorot, X. & Bengio, Y. (2010), using a normal distribution. The resulting tensor will have values sampled from \(N(0, std)\) where \(std = gain \times \sqrt{2 / (fan\_in + fan\_out)}\). Also known as Glorot initialisation.
Parameters:  tensor – an n-dimensional torch.Tensor or autograd.Variable
 gain – an optional scaling factor
Examples
>>> w = torch.Tensor(3, 5)
>>> nn.init.xavier_normal(w)

torch.nn.init.
kaiming_uniform
(tensor, a=0, mode='fan_in')[source]¶ Fills the input Tensor or Variable with values according to the method described in “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification” – He, K. et al. (2015), using a uniform distribution. The resulting tensor will have values sampled from \(U(-bound, bound)\) where \(bound = \sqrt{2 / ((1 + a^2) \times fan\_in)} \times \sqrt{3}\). Also known as He initialisation.
Parameters:  tensor – an n-dimensional torch.Tensor or autograd.Variable
 a – the negative slope of the rectifier used after this layer (0 for ReLU by default)
 mode – either ‘fan_in’ (default) or ‘fan_out’. Choosing fan_in preserves the magnitude of the variance of the weights in the forward pass. Choosing fan_out preserves the magnitudes in the backwards pass.
Examples
>>> w = torch.Tensor(3, 5)
>>> nn.init.kaiming_uniform(w, mode='fan_in')

torch.nn.init.
kaiming_normal
(tensor, a=0, mode='fan_in')[source]¶ Fills the input Tensor or Variable with values according to the method described in “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification” – He, K. et al. (2015), using a normal distribution. The resulting tensor will have values sampled from \(N(0, std)\) where \(std = \sqrt{2 / ((1 + a^2) \times fan\_in)}\). Also known as He initialisation.
Parameters:  tensor – an n-dimensional torch.Tensor or autograd.Variable
 a – the negative slope of the rectifier used after this layer (0 for ReLU by default)
 mode – either ‘fan_in’ (default) or ‘fan_out’. Choosing fan_in preserves the magnitude of the variance of the weights in the forward pass. Choosing fan_out preserves the magnitudes in the backwards pass.
Examples
>>> w = torch.Tensor(3, 5)
>>> nn.init.kaiming_normal(w, mode='fan_out')

torch.nn.init.
orthogonal
(tensor, gain=1)[source]¶ Fills the input Tensor or Variable with a (semi) orthogonal matrix, as described in “Exact solutions to the nonlinear dynamics of learning in deep linear neural networks”  Saxe, A. et al. (2013). The input tensor must have at least 2 dimensions, and for tensors with more than 2 dimensions the trailing dimensions are flattened.
Parameters:  tensor – an n-dimensional torch.Tensor or autograd.Variable, where n >= 2
 gain – optional scaling factor
Examples
>>> w = torch.Tensor(3, 5)
>>> nn.init.orthogonal(w)

torch.nn.init.
sparse
(tensor, sparsity, std=0.01)[source]¶ Fills the 2D input Tensor or Variable as a sparse matrix, where the non-zero elements will be drawn from the normal distribution \(N(0, 0.01)\), as described in “Deep learning via Hessian-free optimization” – Martens, J. (2010).
Parameters:  tensor – an n-dimensional torch.Tensor or autograd.Variable
 sparsity – the fraction of elements in each column to be set to zero
 std – the standard deviation of the normal distribution used to generate the non-zero values
Examples
>>> w = torch.Tensor(3, 5)
>>> nn.init.sparse(w, sparsity=0.1)