torch.nn

Parameters

class torch.nn.Parameter [source]

A kind of Tensor that is to be considered a module parameter.

Parameters are Tensor subclasses that have a very special property when used with Modules: when they're assigned as Module attributes, they are automatically added to the list of its parameters, and will appear e.g. in the parameters() iterator. Assigning a plain Tensor does not have such an effect. This is because one might want to cache some temporary state, like the last hidden state of the RNN, in the model. If there were no such class as Parameter, these temporaries would get registered too.

Parameters:
- data (Tensor) – parameter tensor.
- requires_grad (bool, optional) – if the parameter requires gradient. See Excluding subgraphs from backward for more details. Default: True
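To make the registration behavior concrete, here is a minimal sketch (the module and attribute names are illustrative): only the Parameter attribute ends up in parameters(), while the plain Tensor is just cached state.

import torch
import torch.nn as nn

class Scale(nn.Module):
    def __init__(self):
        super(Scale, self).__init__()
        # A Parameter attribute is registered automatically
        self.weight = nn.Parameter(torch.ones(3))
        # A plain Tensor attribute is NOT registered (e.g. cached state)
        self.last_output = torch.zeros(3)

    def forward(self, x):
        self.last_output = x * self.weight
        return self.last_output

m = Scale()
print([name for name, _ in m.named_parameters()])  # ['weight']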
Containers

Module

class torch.nn.Module [source]

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call .cuda(), etc.
add_module(name, module) [source]

Adds a child module to the current module.

The module can be accessed as an attribute using the given name.

Parameters:
- name (string) – name of the child module. The child module can be accessed from this module using the given name.
- module (Module) – child module to be added to the module.
apply(fn) [source]

Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch-nn-init).

Parameters: fn (Module -> None) – function to be applied to each submodule
Returns: self
Return type: Module

Example:

>>> def init_weights(m):
        print(m)
        if type(m) == nn.Linear:
            m.weight.data.fill_(1.0)
            print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1.,  1.],
        [ 1.,  1.]])
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1.,  1.],
        [ 1.,  1.]])
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
children() [source]

Returns an iterator over immediate children modules.

Yields: Module – a child module
cuda(device=None) [source]

Moves all model parameters and buffers to the GPU.

This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on the GPU while being optimized.

Parameters: device (int, optional) – if specified, all parameters will be copied to that device
Returns: self
Return type: Module
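A minimal sketch of the ordering requirement above: move the module to the GPU first, then build the optimizer from the resulting parameters (the model and optimizer here are illustrative).

import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 2)
if torch.cuda.is_available():
    model.cuda()  # move parameters first, so the optimizer sees the GPU tensors
# Construct the optimizer only after the move
optimizer = optim.SGD(model.parameters(), lr=0.1)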
double() [source]

Casts all floating point parameters and buffers to double datatype.

Returns: self
Return type: Module
dump_patches = False

This allows better BC support for load_state_dict(). In state_dict(), the version number will be saved in the attribute _metadata of the returned state dict, and thus pickled. _metadata is a dictionary whose keys follow the naming convention of the state dict. See _load_from_state_dict on how to use this information in loading.

If new parameters/buffers are added/removed from a module, this number shall be bumped, and the module's _load_from_state_dict method can compare the version number and make appropriate changes if the state dict is from before the change.
eval() [source]

Sets the module in evaluation mode.

This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
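A minimal sketch of the mode switch: in evaluation mode, Dropout becomes the identity, so the output is deterministic.

import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 4)

drop.train()    # training mode: elements are randomly zeroed and rescaled
print(drop(x))

drop.eval()     # evaluation mode: Dropout is a no-op
print(drop(x))  # tensor([[ 1.,  1.,  1.,  1.]])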
extra_repr() [source]

Set the extra representation of the module.

To print customized extra information, you should reimplement this method in your own modules. Both single-line and multi-line strings are acceptable.
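As an illustrative sketch (the module and its field are hypothetical), a custom module can report its configuration through extra_repr:

import torch.nn as nn

class Scale(nn.Module):
    def __init__(self, factor):
        super(Scale, self).__init__()
        self.factor = factor

    def forward(self, x):
        return x * self.factor

    def extra_repr(self):
        # This string is inserted between the parentheses in repr(module)
        return 'factor={}'.format(self.factor)

print(Scale(2.0))  # Scale(factor=2.0)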
float() [source]

Casts all floating point parameters and buffers to float datatype.

Returns: self
Return type: Module
forward(*input) [source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
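A minimal sketch of the difference (using register_forward_hook, documented below): calling the instance runs hooks, while calling .forward() directly does not.

import torch
import torch.nn as nn

linear = nn.Linear(2, 2)
handle = linear.register_forward_hook(lambda mod, inp, out: print('hook fired'))

x = torch.randn(1, 2)
y = linear(x)          # prints 'hook fired'
y = linear.forward(x)  # silently skips the hook
handle.remove()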
half() [source]

Casts all floating point parameters and buffers to half datatype.

Returns: self
Return type: Module
load_state_dict(state_dict, strict=True) [source]

Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module's state_dict() function.

Parameters:
- state_dict (dict) – a dict containing parameters and persistent buffers.
- strict (bool, optional) – whether to strictly enforce that the keys in state_dict match the keys returned by this module's state_dict() function. Default: True
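A minimal round-trip sketch of the state_dict()/load_state_dict() pair (the file name is illustrative):

import torch
import torch.nn as nn

model = nn.Linear(4, 2)
torch.save(model.state_dict(), 'model.pt')        # persist parameters and buffers

restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load('model.pt'))  # keys must match exactly (strict=True)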
modules() [source]

Returns an iterator over all modules in the network.

Yields: Module – a module in the network

Note

Duplicate modules are returned only once. In the following example, l will be returned only once.

Example:

>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):
        print(idx, '->', m)

0 -> Sequential (
  (0): Linear (2 -> 2)
  (1): Linear (2 -> 2)
)
1 -> Linear (2 -> 2)
named_children() [source]

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

Yields: (string, Module) – Tuple containing a name and child module

Example:

>>> for name, module in model.named_children():
>>>     if name in ['conv4', 'conv5']:
>>>         print(module)
named_modules(memo=None, prefix='') [source]

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

Yields: (string, Module) – Tuple of name and module

Note

Duplicate modules are returned only once. In the following example, l will be returned only once.

Example:

>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()):
        print(idx, '->', m)

0 -> ('', Sequential (
  (0): Linear (2 -> 2)
  (1): Linear (2 -> 2)
))
1 -> ('0', Linear (2 -> 2))
named_parameters(memo=None, prefix='') [source]

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

Yields: (string, Parameter) – Tuple containing the name and parameter

Example:

>>> for name, param in self.named_parameters():
>>>     if name in ['bias']:
>>>         print(param.size())
parameters() [source]

Returns an iterator over module parameters.

This is typically passed to an optimizer.

Yields: Parameter – module parameter

Example:

>>> for param in model.parameters():
>>>     print(type(param.data), param.size())
<class 'torch.FloatTensor'> (20L,)
<class 'torch.FloatTensor'> (20L, 1L, 5L, 5L)
register_backward_hook(hook) [source]

Registers a backward hook on the module.

The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature:

hook(module, grad_input, grad_output) -> Tensor or None

The grad_input and grad_output may be tuples if the module has multiple inputs or outputs. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations.

Returns: a handle that can be used to remove the added hook by calling handle.remove()
Return type: torch.utils.hooks.RemovableHandle
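A minimal sketch of a backward hook that only inspects gradient shapes (the function name is illustrative):

import torch
import torch.nn as nn

def print_grad_shapes(module, grad_input, grad_output):
    # grad_input and grad_output are tuples; entries may be None
    print([g.size() for g in grad_output if g is not None])

linear = nn.Linear(3, 1)
handle = linear.register_backward_hook(print_grad_shapes)
linear(torch.randn(2, 3)).sum().backward()  # prints [torch.Size([2, 1])]
handle.remove()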
register_buffer(name, tensor) [source]

Adds a persistent buffer to the module.

This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm's running_mean is not a parameter, but is part of the persistent state.

Buffers can be accessed as attributes using the given names.

Parameters:
- name (string) – name of the buffer. The buffer can be accessed from this module using the given name.
- tensor (Tensor) – buffer to be registered.

Example:

>>> self.register_buffer('running_mean', torch.zeros(num_features))
register_forward_hook(hook) [source]

Registers a forward hook on the module.

The hook will be called every time after forward() has computed an output. It should have the following signature:

hook(module, input, output) -> None

The hook should not modify the input or output.

Returns: a handle that can be used to remove the added hook by calling handle.remove()
Return type: torch.utils.hooks.RemovableHandle
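For instance, a forward hook can capture intermediate activations without modifying the module (a minimal sketch; the dict name is illustrative):

import torch
import torch.nn as nn

activations = {}

def save_activation(module, input, output):
    activations['linear'] = output.detach()

linear = nn.Linear(2, 2)
handle = linear.register_forward_hook(save_activation)
linear(torch.randn(1, 2))
print(activations['linear'].size())  # torch.Size([1, 2])
handle.remove()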
register_forward_pre_hook(hook) [source]

Registers a forward pre-hook on the module.

The hook will be called every time before forward() is invoked. It should have the following signature:

hook(module, input) -> None

The hook should not modify the input.

Returns: a handle that can be used to remove the added hook by calling handle.remove()
Return type: torch.utils.hooks.RemovableHandle
register_parameter(name, param) [source]

Adds a parameter to the module.

The parameter can be accessed as an attribute using the given name.

Parameters:
- name (string) – name of the parameter. The parameter can be accessed from this module using the given name.
- param (Parameter) – parameter to be added to the module.
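register_parameter is the explicit form of assigning a Parameter attribute, and is useful when the name is computed at runtime. A minimal sketch (the module is illustrative):

import torch
import torch.nn as nn

class Bias(nn.Module):
    def __init__(self, n):
        super(Bias, self).__init__()
        # Equivalent to: self.bias = nn.Parameter(torch.zeros(n))
        self.register_parameter('bias', nn.Parameter(torch.zeros(n)))

    def forward(self, x):
        return x + self.bias

print([name for name, _ in Bias(3).named_parameters()])  # ['bias']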
state_dict(destination=None, prefix='', keep_vars=False) [source]

Returns a dictionary containing the whole state of the module.

Both parameters and persistent buffers (e.g. running averages) are included. Keys are the corresponding parameter and buffer names.

Returns: a dictionary containing the whole state of the module
Return type: dict

Example:

>>> module.state_dict().keys()
['bias', 'weight']
to(*args, **kwargs) [source]

Moves and/or casts the parameters and buffers.

This can be called as

to(device)
to(dtype)
to(device, dtype)

It has a similar signature to torch.Tensor.to(), but does not take a Tensor and only takes in floating point dtypes. In particular, this method will only cast the floating point parameters and buffers to dtype. It will still move the integral parameters and buffers to device, if that is given. See below for examples.

Note

This method modifies the module in-place.

Parameters:
- device (torch.device) – the desired device of the parameters and buffers in this module
- dtype (torch.dtype) – the desired floating point type of the floating point parameters and buffers in this module

Returns: self
Return type: Module

Example:

>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)
train(mode=True) [source]

Sets the module in training mode.

This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

Returns: self
Return type: Module
Sequential

class torch.nn.Sequential(*args) [source]

A sequential container. Modules will be added to it in the order they are passed in the constructor. Alternatively, an ordered dict of modules can also be passed in.

To make it easier to understand, here is a small example:

# Example of using Sequential
model = nn.Sequential(
          nn.Conv2d(1, 20, 5),
          nn.ReLU(),
          nn.Conv2d(20, 64, 5),
          nn.ReLU()
        )

# Example of using Sequential with OrderedDict
model = nn.Sequential(OrderedDict([
          ('conv1', nn.Conv2d(1, 20, 5)),
          ('relu1', nn.ReLU()),
          ('conv2', nn.Conv2d(20, 64, 5)),
          ('relu2', nn.ReLU())
        ]))
ModuleList

class torch.nn.ModuleList(modules=None) [source]

Holds submodules in a list.

ModuleList can be indexed like a regular Python list, but the modules it contains are properly registered, and will be visible to all Module methods.

Parameters: modules (iterable, optional) – an iterable of modules to add

Example:

class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.linears = nn.ModuleList([nn.Linear(10, 10) for i in range(10)])

    def forward(self, x):
        # ModuleList can act as an iterable, or be indexed using ints
        for i, l in enumerate(self.linears):
            x = self.linears[i // 2](x) + l(x)
        return x
ParameterList

class torch.nn.ParameterList(parameters=None) [source]

Holds parameters in a list.

ParameterList can be indexed like a regular Python list, but the parameters it contains are properly registered, and will be visible to all Module methods.

Parameters: parameters (iterable, optional) – an iterable of Parameters to add

Example:

class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.params = nn.ParameterList([nn.Parameter(torch.randn(10, 10)) for i in range(10)])

    def forward(self, x):
        # ParameterList can act as an iterable, or be indexed using ints
        for i, p in enumerate(self.params):
            x = self.params[i // 2].mm(x) + p.mm(x)
        return x

append(parameter) [source]

Appends a given parameter at the end of the list.

Parameters: parameter (nn.Parameter) – parameter to append
Convolution layers

Conv1d

class torch.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True) [source]

Applies a 1D convolution over an input signal composed of several input planes.

In the simplest case, the output value of the layer with input size (N, C_{in}, L) and output (N, C_{out}, L_{out}) can be precisely described as:

\text{out}(N_i, C_{out_j}) = \text{bias}(C_{out_j}) + \sum_{k=0}^{C_{in}-1} \text{weight}(C_{out_j}, k) \star \text{input}(N_i, k)

where \star is the valid cross-correlation operator, N is a batch size, C denotes a number of channels, L is a length of signal sequence.
stride controls the stride for the cross-correlation, a single number or a one-element tuple.

padding controls the amount of implicit zero-padding on both sides for padding number of points.

dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.

groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,

- At groups=1, all inputs are convolved to all outputs.
- At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, with both subsequently concatenated.
- At groups=in_channels, each input channel is convolved with its own set of filters (of size \left\lfloor \frac{out\_channels}{in\_channels} \right\rfloor).
Note

Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding.

Note

The configuration when groups == in_channels and out_channels == K * in_channels, where K is a positive integer, is termed in the literature as a depthwise convolution.

In other words, for an input of size (N, C_{in}, L_{in}), if you want a depthwise convolution with a depthwise multiplier K, then you use the constructor arguments (in_channels=C_{in}, out_channels=C_{in} * K, ..., groups=C_{in}).
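A quick sketch of the depthwise construction just described (shapes chosen for illustration), with C_{in} = 4 and multiplier K = 2:

>>> m = nn.Conv1d(in_channels=4, out_channels=8, kernel_size=3, groups=4)
>>> input = torch.randn(1, 4, 10)
>>> m(input).size()
torch.Size([1, 8, 8])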
Parameters:
- in_channels (int) – Number of channels in the input image
- out_channels (int) – Number of channels produced by the convolution
- kernel_size (int or tuple) – Size of the convolving kernel
- stride (int or tuple, optional) – Stride of the convolution. Default: 1
- padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0
- dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
- groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
- bias (bool, optional) – If True, adds a learnable bias to the output. Default: True

Shape:
- Input: (N, C_{in}, L_{in})
- Output: (N, C_{out}, L_{out}) where

L_{out} = \left\lfloor\frac{L_{in} + 2 * \text{padding} - \text{dilation} * (\text{kernel_size} - 1) - 1}{\text{stride}} + 1\right\rfloor

Variables:
- weight (Tensor) – the learnable weights of the module of shape (out_channels, in_channels, kernel_size)
- bias (Tensor) – the learnable bias of the module of shape (out_channels)

Examples:
>>> m = nn.Conv1d(16, 33, 3, stride=2)
>>> input = torch.randn(20, 16, 50)
>>> output = m(input)
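Plugging the example above into the L_{out} formula (a quick check, not part of the original entry): L_{in} = 50, padding = 0, dilation = 1, kernel_size = 3, stride = 2, so L_{out} = floor((50 - 2 - 1)/2 + 1) = 24.

>>> output.size()
torch.Size([20, 33, 24])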
Conv2d

class torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True) [source]

Applies a 2D convolution over an input signal composed of several input planes.

In the simplest case, the output value of the layer with input size (N, C_{in}, H, W) and output (N, C_{out}, H_{out}, W_{out}) can be precisely described as:

\text{out}(N_i, C_{out_j}) = \text{bias}(C_{out_j}) + \sum_{k=0}^{C_{in}-1} \text{weight}(C_{out_j}, k) \star \text{input}(N_i, k)

where \star is the valid 2D cross-correlation operator, N is a batch size, C denotes a number of channels, H is a height of input planes in pixels, and W is width in pixels.
stride controls the stride for the cross-correlation, a single number or a tuple.

padding controls the amount of implicit zero-padding on both sides for padding number of points for each dimension.

dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.

groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,

- At groups=1, all inputs are convolved to all outputs.
- At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, with both subsequently concatenated.
- At groups=in_channels, each input channel is convolved with its own set of filters (of size \left\lfloor \frac{out\_channels}{in\_channels} \right\rfloor).
The parameters kernel_size, stride, padding, dilation can either be:

- a single int – in which case the same value is used for the height and width dimension
- a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
Note

Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding.

Note

The configuration when groups == in_channels and out_channels == K * in_channels, where K is a positive integer, is termed in the literature as a depthwise convolution.

In other words, for an input of size (N, C_{in}, H_{in}, W_{in}), if you want a depthwise convolution with a depthwise multiplier K, then you use the constructor arguments (in_channels=C_{in}, out_channels=C_{in} * K, ..., groups=C_{in}).
Parameters:
- in_channels (int) – Number of channels in the input image
- out_channels (int) – Number of channels produced by the convolution
- kernel_size (int or tuple) – Size of the convolving kernel
- stride (int or tuple, optional) – Stride of the convolution. Default: 1
- padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0
- dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
- groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
- bias (bool, optional) – If True, adds a learnable bias to the output. Default: True

Shape:
- Input: (N, C_{in}, H_{in}, W_{in})
- Output: (N, C_{out}, H_{out}, W_{out}) where

H_{out} = \left\lfloor\frac{H_{in} + 2 * \text{padding}[0] - \text{dilation}[0] * (\text{kernel_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor

W_{out} = \left\lfloor\frac{W_{in} + 2 * \text{padding}[1] - \text{dilation}[1] * (\text{kernel_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor

Variables:
- weight (Tensor) – the learnable weights of the module of shape (out_channels, in_channels, kernel_size[0], kernel_size[1])
- bias (Tensor) – the learnable bias of the module of shape (out_channels)

Examples:
>>> # With square kernels and equal stride
>>> m = nn.Conv2d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> # non-square kernels and unequal stride and with padding and dilation
>>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
>>> input = torch.randn(20, 16, 50, 100)
>>> output = m(input)
Conv3d

class torch.nn.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True) [source]

Applies a 3D convolution over an input signal composed of several input planes.

In the simplest case, the output value of the layer with input size (N, C_{in}, D, H, W) and output (N, C_{out}, D_{out}, H_{out}, W_{out}) can be precisely described as:

\text{out}(N_i, C_{out_j}) = \text{bias}(C_{out_j}) + \sum_{k=0}^{C_{in}-1} \text{weight}(C_{out_j}, k) \star \text{input}(N_i, k)

where \star is the valid 3D cross-correlation operator.
stride controls the stride for the cross-correlation.

padding controls the amount of implicit zero-padding on both sides for padding number of points for each dimension.

dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.

groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,

- At groups=1, all inputs are convolved to all outputs.
- At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, with both subsequently concatenated.
- At groups=in_channels, each input channel is convolved with its own set of filters (of size \left\lfloor \frac{out\_channels}{in\_channels} \right\rfloor).
The parameters kernel_size, stride, padding, dilation can either be:

- a single int – in which case the same value is used for the depth, height and width dimension
- a tuple of three ints – in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension
Note

Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding.

Note

The configuration when groups == in_channels and out_channels == K * in_channels, where K is a positive integer, is termed in the literature as a depthwise convolution.

In other words, for an input of size (N, C_{in}, D_{in}, H_{in}, W_{in}), if you want a depthwise convolution with a depthwise multiplier K, then you use the constructor arguments (in_channels=C_{in}, out_channels=C_{in} * K, ..., groups=C_{in}).
Parameters:
- in_channels (int) – Number of channels in the input image
- out_channels (int) – Number of channels produced by the convolution
- kernel_size (int or tuple) – Size of the convolving kernel
- stride (int or tuple, optional) – Stride of the convolution. Default: 1
- padding (int or tuple, optional) – Zero-padding added to all three sides of the input. Default: 0
- dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
- groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
- bias (bool, optional) – If True, adds a learnable bias to the output. Default: True

Shape:
- Input: (N, C_{in}, D_{in}, H_{in}, W_{in})
- Output: (N, C_{out}, D_{out}, H_{out}, W_{out}) where

D_{out} = \left\lfloor\frac{D_{in} + 2 * \text{padding}[0] - \text{dilation}[0] * (\text{kernel_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor

H_{out} = \left\lfloor\frac{H_{in} + 2 * \text{padding}[1] - \text{dilation}[1] * (\text{kernel_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor

W_{out} = \left\lfloor\frac{W_{in} + 2 * \text{padding}[2] - \text{dilation}[2] * (\text{kernel_size}[2] - 1) - 1}{\text{stride}[2]} + 1\right\rfloor

Variables:
- weight (Tensor) – the learnable weights of the module of shape (out_channels, in_channels, kernel_size[0], kernel_size[1], kernel_size[2])
- bias (Tensor) – the learnable bias of the module of shape (out_channels)

Examples:
>>> # With square kernels and equal stride
>>> m = nn.Conv3d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.Conv3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(4, 2, 0))
>>> input = torch.randn(20, 16, 10, 50, 100)
>>> output = m(input)
ConvTranspose1d

class torch.nn.ConvTranspose1d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1) [source]

Applies a 1D transposed convolution operator over an input image composed of several input planes.
This module can be seen as the gradient of Conv1d with respect to its input. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation).
stride controls the stride for the cross-correlation.

padding controls the amount of implicit zero-padding on both sides for padding number of points.

output_padding controls the amount of implicit zero-padding on both sides of the output for output_padding number of points.

dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.

groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,

- At groups=1, all inputs are convolved to all outputs.
- At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, with both subsequently concatenated.
- At groups=in_channels, each input channel is convolved with its own set of filters (of size \left\lfloor \frac{out\_channels}{in\_channels} \right\rfloor).
Note

Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding.

Note

The padding argument effectively adds kernel_size - 1 - padding amount of zero padding to both sides of the input. This is set so that when a Conv1d and a ConvTranspose1d are initialized with the same parameters, they are inverses of each other in regard to the input and output shapes. However, when stride > 1, Conv1d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side. Note that output_padding is only used to find the output shape, but does not actually add zero-padding to the output.

Parameters:
- in_channels (int) – Number of channels in the input image
- out_channels (int) – Number of channels produced by the convolution
- kernel_size (int or tuple) – Size of the convolving kernel
- stride (int or tuple, optional) – Stride of the convolution. Default: 1
- padding (int or tuple, optional) – kernel_size - 1 - padding zero-padding will be added to both sides of the input. Default: 0
- output_padding (int or tuple, optional) – Additional size added to one side of the output shape. Default: 0
- groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
- bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
- dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1

Shape:
- Input: (N, C_{in}, L_{in})
- Output: (N, C_{out}, L_{out}) where

L_{out} = (L_{in} - 1) * \text{stride} - 2 * \text{padding} + \text{kernel_size} + \text{output_padding}

Variables:
- weight (Tensor) – the learnable weights of the module of shape (in_channels, out_channels, kernel_size)
- bias (Tensor) – the learnable bias of the module of shape (out_channels)
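A minimal usage sketch, analogous to the Conv1d example above (shapes are illustrative):

Example:

>>> m = nn.ConvTranspose1d(16, 33, 3, stride=2)
>>> input = torch.randn(20, 16, 24)
>>> output = m(input)
>>> output.size()  # (24 - 1) * 2 - 0 + 3 + 0 = 49
torch.Size([20, 33, 49])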
ConvTranspose2d

class torch.nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1) [source]

Applies a 2D transposed convolution operator over an input image composed of several input planes.
This module can be seen as the gradient of Conv2d with respect to its input. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation).
stride controls the stride for the cross-correlation.

padding controls the amount of implicit zero-padding on both sides for padding number of points for each dimension.

output_padding controls the amount of implicit zero-padding on both sides of the output for output_padding number of points for each dimension.

dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.

groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,

- At groups=1, all inputs are convolved to all outputs.
- At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, with both subsequently concatenated.
- At groups=in_channels, each input channel is convolved with its own set of filters (of size \left\lfloor \frac{out\_channels}{in\_channels} \right\rfloor).
The parameters kernel_size, stride, padding, output_padding can either be:

- a single int – in which case the same value is used for the height and width dimensions
- a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
Note

Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding.

Note

The padding argument effectively adds kernel_size - 1 - padding amount of zero padding to both sides of the input. This is set so that when a Conv2d and a ConvTranspose2d are initialized with the same parameters, they are inverses of each other in regard to the input and output shapes. However, when stride > 1, Conv2d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side. Note that output_padding is only used to find the output shape, but does not actually add zero-padding to the output.

Parameters:
- in_channels (int) – Number of channels in the input image
- out_channels (int) – Number of channels produced by the convolution
- kernel_size (int or tuple) – Size of the convolving kernel
- stride (int or tuple, optional) – Stride of the convolution. Default: 1
- padding (int or tuple, optional) – kernel_size - 1 - padding zero-padding will be added to both sides of each dimension in the input. Default: 0
- output_padding (int or tuple, optional) – Additional size added to one side of each dimension in the output shape. Default: 0
- groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
- bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
- dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1

Shape:
- Input: (N, C_{in}, H_{in}, W_{in})
- Output: (N, C_{out}, H_{out}, W_{out}) where

H_{out} = (H_{in} - 1) * \text{stride}[0] - 2 * \text{padding}[0] + \text{kernel_size}[0] + \text{output_padding}[0]

W_{out} = (W_{in} - 1) * \text{stride}[1] - 2 * \text{padding}[1] + \text{kernel_size}[1] + \text{output_padding}[1]

Variables:
- weight (Tensor) – the learnable weights of the module of shape (in_channels, out_channels, kernel_size[0], kernel_size[1])
- bias (Tensor) – the learnable bias of the module of shape (out_channels)

Examples:
>>> # With square kernels and equal stride
>>> m = nn.ConvTranspose2d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.ConvTranspose2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> input = torch.randn(20, 16, 50, 100)
>>> output = m(input)
>>> # exact output size can be also specified as an argument
>>> input = torch.randn(1, 16, 12, 12)
>>> downsample = nn.Conv2d(16, 16, 3, stride=2, padding=1)
>>> upsample = nn.ConvTranspose2d(16, 16, 3, stride=2, padding=1)
>>> h = downsample(input)
>>> h.size()
torch.Size([1, 16, 6, 6])
>>> output = upsample(h, output_size=input.size())
>>> output.size()
torch.Size([1, 16, 12, 12])
ConvTranspose3d

class torch.nn.ConvTranspose3d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1) [source]

Applies a 3D transposed convolution operator over an input image composed of several input planes. The transposed convolution operator multiplies each input value element-wise by a learnable kernel, and sums over the outputs from all input feature planes.
This module can be seen as the gradient of Conv3d with respect to its input. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation).
stride controls the stride for the cross-correlation.

padding controls the amount of implicit zero-padding on both sides for padding number of points for each dimension.

output_padding controls the amount of implicit zero-padding on both sides of the output for output_padding number of points for each dimension.

dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.

groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,

- At groups=1, all inputs are convolved to all outputs.
- At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, with both subsequently concatenated.
- At groups=in_channels, each input channel is convolved with its own set of filters (of size \left\lfloor \frac{out\_channels}{in\_channels} \right\rfloor).
The parameters kernel_size, stride, padding, output_padding can either be:

- a single int – in which case the same value is used for the depth, height and width dimensions
- a tuple of three ints – in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension
Note

Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding.

Note

The padding argument effectively adds kernel_size - 1 - padding amount of zero padding to both sides of the input. This is set so that when a Conv3d and a ConvTranspose3d are initialized with the same parameters, they are inverses of each other in regard to the input and output shapes. However, when stride > 1, Conv3d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side. Note that output_padding is only used to find the output shape, but does not actually add zero-padding to the output.

Parameters:
- in_channels (int) – Number of channels in the input image
- out_channels (int) – Number of channels produced by the convolution
- kernel_size (int or tuple) – Size of the convolving kernel
- stride (int or tuple, optional) – Stride of the convolution. Default: 1
- padding (int or tuple, optional) – kernel_size - 1 - padding zero-padding will be added to both sides of each dimension in the input. Default: 0
- output_padding (int or tuple, optional) – Additional size added to one side of each dimension in the output shape. Default: 0
- groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
- bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
- dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1

Shape:
- Input: (N, C_{in}, D_{in}, H_{in}, W_{in})
- Output: (N, C_{out}, D_{out}, H_{out}, W_{out}) where

D_{out} = (D_{in} - 1) * \text{stride}[0] - 2 * \text{padding}[0] + \text{kernel_size}[0] + \text{output_padding}[0]

H_{out} = (H_{in} - 1) * \text{stride}[1] - 2 * \text{padding}[1] + \text{kernel_size}[1] + \text{output_padding}[1]

W_{out} = (W_{in} - 1) * \text{stride}[2] - 2 * \text{padding}[2] + \text{kernel_size}[2] + \text{output_padding}[2]

Variables:
- weight (Tensor) – the learnable weights of the module of shape (in_channels, out_channels, kernel_size[0], kernel_size[1], kernel_size[2])
- bias (Tensor) – the learnable bias of the module of shape (out_channels)

Examples:
>>> # With square kernels and equal stride
>>> m = nn.ConvTranspose3d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.ConvTranspose3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(0, 4, 2))
>>> input = torch.randn(20, 16, 10, 50, 100)
>>> output = m(input)
Pooling layers

MaxPool1d

class torch.nn.MaxPool1d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False) [source]

Applies a 1D max pooling over an input signal composed of several input planes.

In the simplest case, the output value of the layer with input size (N, C, L) and output (N, C, L_{out}) can be precisely described as:

\text{out}(N_i, C_j, k) = \max_{m=0, \ldots, \text{kernel_size}-1} \text{input}(N_i, C_j, \text{stride} * k + m)

If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points. dilation controls the spacing between the kernel points. It is harder to describe, but this link has a nice visualization of what dilation does.

Parameters:
- kernel_size – the size of the window to take a max over
- stride – the stride of the window. Default value is kernel_size
- padding – implicit zero padding to be added on both sides
- dilation – a parameter that controls the stride of elements in the window
- return_indices – if True, will return the max indices along with the outputs. Useful when unpooling later
- ceil_mode – when True, will use ceil instead of floor to compute the output shape
- Shape:
Input: (N, C, L_{in})
Output: (N, C, L_{out}) where
L_{out} = \left\lfloor \frac{L_{in} + 2 * \text{padding} - \text{dilation} * (\text{kernel_size} - 1) - 1}{\text{stride}} + 1\right\rfloor
Examples:
>>> # pool of size=3, stride=2
>>> m = nn.MaxPool1d(3, stride=2)
>>> input = torch.randn(20, 16, 50)
>>> output = m(input)
MaxPool2d

class torch.nn.MaxPool2d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False) [source]

Applies a 2D max pooling over an input signal composed of several input planes.

In the simplest case, the output value of the layer with input size (N, C, H, W), output (N, C, H_{out}, W_{out}) and kernel_size (kH, kW) can be precisely described as:

\text{out}(N_i, C_j, h, w) = \max_{m=0, \ldots, kH-1} \max_{n=0, \ldots, kW-1} \text{input}(N_i, C_j, \text{stride}[0] * h + m, \text{stride}[1] * w + n)

If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points. dilation controls the spacing between the kernel points. It is harder to describe, but this link has a nice visualization of what dilation does.

The parameters kernel_size, stride, padding, dilation can either be:

- a single int – in which case the same value is used for the height and width dimension
- a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension

Parameters:
- kernel_size – the size of the window to take a max over
- stride – the stride of the window. Default value is kernel_size
- padding – implicit zero padding to be added on both sides
- dilation – a parameter that controls the stride of elements in the window
- return_indices – if True, will return the max indices along with the outputs. Useful when unpooling later
- ceil_mode – when True, will use ceil instead of floor to compute the output shape
- Shape:
Input: (N, C, H_{in}, W_{in})
Output: (N, C, H_{out}, W_{out}) where
\begin{align}\begin{aligned}H_{out} = \left\lfloor\frac{H_{in} + 2 * \text{padding}[0] - \text{dilation}[0] * (\text{kernel_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor\\W_{out} = \left\lfloor\frac{W_{in} + 2 * \text{padding}[1] - \text{dilation}[1] * (\text{kernel_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor\end{aligned}\end{align}
Examples:
>>> # pool of square window of size=3, stride=2
>>> m = nn.MaxPool2d(3, stride=2)
>>> # pool of non-square window
>>> m = nn.MaxPool2d((3, 2), stride=(2, 1))
>>> input = torch.randn(20, 16, 50, 32)
>>> output = m(input)
MaxPool3d

class torch.nn.MaxPool3d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False) [source]

Applies a 3D max pooling over an input signal composed of several input planes.

In the simplest case, the output value of the layer with input size (N, C, D, H, W), output (N, C, D_{out}, H_{out}, W_{out}) and kernel_size (kD, kH, kW) can be precisely described as:

\text{out}(N_i, C_j, d, h, w) = \max_{k=0, \ldots, kD-1} \max_{m=0, \ldots, kH-1} \max_{n=0, \ldots, kW-1} \text{input}(N_i, C_j, \text{stride}[0] * d + k, \text{stride}[1] * h + m, \text{stride}[2] * w + n)

If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points. dilation controls the spacing between the kernel points. It is harder to describe, but this link has a nice visualization of what dilation does.

The parameters kernel_size, stride, padding, dilation can either be:

- a single int – in which case the same value is used for the depth, height and width dimension
- a tuple of three ints – in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension

Parameters:
- kernel_size – the size of the window to take a max over
- stride – the stride of the window. Default value is kernel_size
- padding – implicit zero padding to be added on all three sides
- dilation – a parameter that controls the stride of elements in the window
- return_indices – if True, will return the max indices along with the outputs. Useful when unpooling later
- ceil_mode – when True, will use ceil instead of floor to compute the output shape
- Shape:
Input: (N, C, D_{in}, H_{in}, W_{in})
Output: (N, C, D_{out}, H_{out}, W_{out}) where
\begin{align}\begin{aligned}D_{out} = \left\lfloor\frac{D_{in} + 2 * \text{padding}[0] - \text{dilation}[0] * (\text{kernel_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor\\H_{out} = \left\lfloor\frac{H_{in} + 2 * \text{padding}[1] - \text{dilation}[1] * (\text{kernel_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor\\W_{out} = \left\lfloor\frac{W_{in} + 2 * \text{padding}[2] - \text{dilation}[2] * (\text{kernel_size}[2] - 1) - 1}{\text{stride}[2]} + 1\right\rfloor\end{aligned}\end{align}
Examples:
>>> # pool of square window of size=3, stride=2
>>> m = nn.MaxPool3d(3, stride=2)
>>> # pool of non-square window
>>> m = nn.MaxPool3d((3, 2, 2), stride=(2, 1, 2))
>>> input = torch.randn(20, 16, 50, 44, 31)
>>> output = m(input)
MaxUnpool1d

class torch.nn.MaxUnpool1d(kernel_size, stride=None, padding=0) [source]

Computes a partial inverse of MaxPool1d.

MaxPool1d is not fully invertible, since the non-maximal values are lost. MaxUnpool1d takes in as input the output of MaxPool1d including the indices of the maximal values and computes a partial inverse in which all non-maximal values are set to zero.

Note

MaxPool1d can map several input sizes to the same output sizes. Hence, the inversion process can get ambiguous. To accommodate this, you can provide the needed output size as an additional argument output_size in the forward call. See the Inputs and Example below.

Parameters:
- kernel_size (int or tuple) – Size of the max pooling window.
- stride (int or tuple) – Stride of the max pooling window. It is set to kernel_size by default.
- padding (int or tuple) – Padding that was added to the input

Inputs:
- input: the input Tensor to invert
- indices: the indices given out by MaxPool1d
- output_size (optional) – a torch.Size that specifies the targeted output size
- Shape:
Input: (N, C, H_{in})
Output: (N, C, H_{out}) where
H_{out} = (H_{in} - 1) * \text{stride}[0] - 2 * \text{padding}[0] + \text{kernel_size}[0]

or as given by output_size in the call operator
Example:
>>> pool = nn.MaxPool1d(2, stride=2, return_indices=True)
>>> unpool = nn.MaxUnpool1d(2, stride=2)
>>> input = torch.tensor([[[1., 2, 3, 4, 5, 6, 7, 8]]])
>>> output, indices = pool(input)
>>> unpool(output, indices)
tensor([[[ 0.,  2.,  0.,  4.,  0.,  6.,  0.,  8.]]])

>>> # Example showcasing the use of output_size
>>> input = torch.tensor([[[1., 2, 3, 4, 5, 6, 7, 8, 9]]])
>>> output, indices = pool(input)
>>> unpool(output, indices, output_size=input.size())
tensor([[[ 0.,  2.,  0.,  4.,  0.,  6.,  0.,  8.,  0.]]])

>>> unpool(output, indices)
tensor([[[ 0.,  2.,  0.,  4.,  0.,  6.,  0.,  8.]]])
MaxUnpool2d

class torch.nn.MaxUnpool2d(kernel_size, stride=None, padding=0) [source]

Computes a partial inverse of MaxPool2d.

MaxPool2d is not fully invertible, since the non-maximal values are lost. MaxUnpool2d takes in as input the output of MaxPool2d including the indices of the maximal values and computes a partial inverse in which all non-maximal values are set to zero.

Note

MaxPool2d can map several input sizes to the same output sizes. Hence, the inversion process can get ambiguous. To accommodate this, you can provide the needed output size as an additional argument output_size in the forward call. See the Inputs and Example below.

Parameters:
- kernel_size (int or tuple) – Size of the max pooling window.
- stride (int or tuple) – Stride of the max pooling window. It is set to kernel_size by default.
- padding (int or tuple) – Padding that was added to the input

Inputs:
- input: the input Tensor to invert
- indices: the indices given out by MaxPool2d
- output_size (optional) – a torch.Size that specifies the targeted output size
- Shape:
Input: (N, C, H_{in}, W_{in})
Output: (N, C, H_{out}, W_{out}) where
H_{out} = (H_{in} - 1) * \text{stride}[0] - 2 * \text{padding}[0] + \text{kernel_size}[0]

W_{out} = (W_{in} - 1) * \text{stride}[1] - 2 * \text{padding}[1] + \text{kernel_size}[1]

or as given by output_size in the call operator
Example:
>>> pool = nn.MaxPool2d(2, stride=2, return_indices=True)
>>> unpool = nn.MaxUnpool2d(2, stride=2)
>>> input = torch.tensor([[[[ 1.,  2,  3,  4],
                            [ 5,  6,  7,  8],
                            [ 9, 10, 11, 12],
                            [13, 14, 15, 16]]]])
>>> output, indices = pool(input)
>>> unpool(output, indices)
tensor([[[[  0.,   0.,   0.,   0.],
          [  0.,   6.,   0.,   8.],
          [  0.,   0.,   0.,   0.],
          [  0.,  14.,   0.,  16.]]]])

>>> # specify a different output size than input size
>>> unpool(output, indices, output_size=torch.Size([1, 1, 5, 5]))
tensor([[[[  0.,   0.,   0.,   0.,   0.],
          [  6.,   0.,   8.,   0.,   0.],
          [  0.,   0.,   0.,  14.,   0.],
          [ 16.,   0.,   0.,   0.,   0.],
          [  0.,   0.,   0.,   0.,   0.]]]])
MaxUnpool3d

class torch.nn.MaxUnpool3d(kernel_size, stride=None, padding=0) [source]

Computes a partial inverse of MaxPool3d.

MaxPool3d is not fully invertible, since the non-maximal values are lost. MaxUnpool3d takes in as input the output of MaxPool3d including the indices of the maximal values and computes a partial inverse in which all non-maximal values are set to zero.

Note

MaxPool3d can map several input sizes to the same output sizes. Hence, the inversion process can get ambiguous. To accommodate this, you can provide the needed output size as an additional argument output_size in the forward call. See the Inputs section below.

Parameters:
- kernel_size (int or tuple) – Size of the max pooling window.
- stride (int or tuple) – Stride of the max pooling window. It is set to kernel_size by default.
- padding (int or tuple) – Padding that was added to the input

Inputs:
- input: the input Tensor to invert
- indices: the indices given out by MaxPool3d
- output_size (optional) – a torch.Size that specifies the targeted output size
- Shape:
Input: (N, C, D_{in}, H_{in}, W_{in})
Output: (N, C, D_{out}, H_{out}, W_{out}) where
D_{out} = (D_{in} - 1) * \text{stride}[0] - 2 * \text{padding}[0] + \text{kernel_size}[0]

H_{out} = (H_{in} - 1) * \text{stride}[1] - 2 * \text{padding}[1] + \text{kernel_size}[1]

W_{out} = (W_{in} - 1) * \text{stride}[2] - 2 * \text{padding}[2] + \text{kernel_size}[2]

or as given by output_size in the call operator
Example:
>>> # pool of square window of size=3, stride=2
>>> pool = nn.MaxPool3d(3, stride=2, return_indices=True)
>>> unpool = nn.MaxUnpool3d(3, stride=2)
>>> output, indices = pool(torch.randn(20, 16, 51, 33, 15))
>>> unpooled_output = unpool(output, indices)
>>> unpooled_output.size()
torch.Size([20, 16, 51, 33, 15])
AvgPool1d

class torch.nn.AvgPool1d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True) [source]

Applies a 1D average pooling over an input signal composed of several input planes.

In the simplest case, the output value of the layer with input size (N, C, L), output (N, C, L_{out}) and kernel_size k can be precisely described as:

\text{out}(N_i, C_j, l) = \frac{1}{k} \sum_{m=0}^{k-1} \text{input}(N_i, C_j, \text{stride} * l + m)

If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points.

The parameters kernel_size, stride, padding can each be an int or a one-element tuple.

Parameters:
- kernel_size – the size of the window
- stride – the stride of the window. Default value is kernel_size
- padding – implicit zero padding to be added on both sides
- ceil_mode – when True, will use ceil instead of floor to compute the output shape
- count_include_pad – when True, will include the zero-padding in the averaging calculation
- Shape:
Input: (N, C, L_{in})
Output: (N, C, L_{out}) where
L_{out} = \left\lfloor \frac{L_{in} + 2 * \text{padding} - \text{kernel_size}}{\text{stride}} + 1\right\rfloor
Examples:
>>> # pool with window of size=3, stride=2
>>> m = nn.AvgPool1d(3, stride=2)
>>> m(torch.tensor([[[1., 2, 3, 4, 5, 6, 7]]]))
tensor([[[ 2.,  4.,  6.]]])
AvgPool2d

class torch.nn.AvgPool2d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True) [source]

Applies a 2D average pooling over an input signal composed of several input planes.

In the simplest case, the output value of the layer with input size (N, C, H, W), output (N, C, H_{out}, W_{out}) and kernel_size (kH, kW) can be precisely described as:

\text{out}(N_i, C_j, h, w) = \frac{1}{kH * kW} \sum_{m=0}^{kH-1} \sum_{n=0}^{kW-1} \text{input}(N_i, C_j, \text{stride}[0] * h + m, \text{stride}[1] * w + n)

If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points.

The parameters kernel_size, stride, padding can either be:

- a single int – in which case the same value is used for the height and width dimension
- a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension

Parameters:
- kernel_size – the size of the window
- stride – the stride of the window. Default value is kernel_size
- padding – implicit zero padding to be added on both sides
- ceil_mode – when True, will use ceil instead of floor to compute the output shape
- count_include_pad – when True, will include the zero-padding in the averaging calculation
- Shape:
Input: (N, C, H_{in}, W_{in})
Output: (N, C, H_{out}, W_{out}) where
\begin{align}\begin{aligned}H_{out} = \left\lfloor\frac{H_{in} + 2 * \text{padding}[0] - \text{kernel_size}[0]}{\text{stride}[0]} + 1\right\rfloor\\W_{out} = \left\lfloor\frac{W_{in} + 2 * \text{padding}[1] - \text{kernel_size}[1]}{\text{stride}[1]} + 1\right\rfloor\end{aligned}\end{align}
Examples:
>>> # pool of square window of size=3, stride=2
>>> m = nn.AvgPool2d(3, stride=2)
>>> # pool of non-square window
>>> m = nn.AvgPool2d((3, 2), stride=(2, 1))
>>> input = torch.randn(20, 16, 50, 32)
>>> output = m(input)
AvgPool3d

class torch.nn.AvgPool3d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True) [source]

Applies a 3D average pooling over an input signal composed of several input planes.

In the simplest case, the output value of the layer with input size (N, C, D, H, W), output (N, C, D_{out}, H_{out}, W_{out}) and kernel_size (kD, kH, kW) can be precisely described as:

\text{out}(N_i, C_j, d, h, w) = \frac{1}{kD * kH * kW} \sum_{k=0}^{kD-1} \sum_{m=0}^{kH-1} \sum_{n=0}^{kW-1} \text{input}(N_i, C_j, \text{stride}[0] * d + k, \text{stride}[1] * h + m, \text{stride}[2] * w + n)

If padding is non-zero, then the input is implicitly zero-padded on all three sides for padding number of points.

The parameters kernel_size, stride can either be:

- a single int – in which case the same value is used for the depth, height and width dimension
- a tuple of three ints – in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension

Parameters:
- kernel_size – the size of the window
- stride – the stride of the window. Default value is kernel_size
- padding – implicit zero padding to be added on all three sides
- ceil_mode – when True, will use ceil instead of floor to compute the output shape
- count_include_pad – when True, will include the zero-padding in the averaging calculation
- Shape:
Input: (N, C, D_{in}, H_{in}, W_{in})
Output: (N, C, D_{out}, H_{out}, W_{out}) where
\begin{align}\begin{aligned}D_{out} = \left\lfloor\frac{D_{in} + 2 * \text{padding}[0] - \text{kernel_size}[0]}{\text{stride}[0]} + 1\right\rfloor\\H_{out} = \left\lfloor\frac{H_{in} + 2 * \text{padding}[1] - \text{kernel_size}[1]}{\text{stride}[1]} + 1\right\rfloor\\W_{out} = \left\lfloor\frac{W_{in} + 2 * \text{padding}[2] - \text{kernel_size}[2]}{\text{stride}[2]} + 1\right\rfloor\end{aligned}\end{align}
Examples:
>>> # pool of square window of size=3, stride=2
>>> m = nn.AvgPool3d(3, stride=2)
>>> # pool of non-square window
>>> m = nn.AvgPool3d((3, 2, 2), stride=(2, 1, 2))
>>> input = torch.randn(20, 16, 50, 44, 31)
>>> output = m(input)
FractionalMaxPool2d

class torch.nn.FractionalMaxPool2d(kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None) [source]

Applies a 2D fractional max pooling over an input signal composed of several input planes.
Fractional MaxPooling is described in detail in the paper Fractional MaxPooling by Ben Graham
The max-pooling operation is applied in kHxkW regions by a stochastic step size determined by the target output size. The number of output features is equal to the number of input planes.
Parameters:
- kernel_size – the size of the window to take a max over. Can be a single number k (for a square kernel of k x k) or a tuple (kh, kw)
- output_size – the target output size of the image of the form oH x oW. Can be a tuple (oH, oW) or a single number oH for a square image oH x oH
- output_ratio – If one wants to have an output size as a ratio of the input size, this option can be given. This has to be a number or tuple in the range (0, 1)
- return_indices – if True, will return the indices along with the outputs. Useful to pass to nn.MaxUnpool2d(). Default: False

Examples:
>>> # pool of square window of size=3, and target output size 13x12
>>> m = nn.FractionalMaxPool2d(3, output_size=(13, 12))
>>> # pool of square window and target output size being half of input image size
>>> m = nn.FractionalMaxPool2d(3, output_ratio=(0.5, 0.5))
>>> input = torch.randn(20, 16, 50, 32)
>>> output = m(input)
LPPool1d

class torch.nn.LPPool1d(norm_type, kernel_size, stride=None, ceil_mode=False) [source]

Applies a 1D power-average pooling over an input signal composed of several input planes.

On each window, the function computed is:

f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}

- At p = \infty, one gets Max Pooling
- At p = 1, one gets Sum Pooling (which is proportional to Average Pooling)

Parameters:
- kernel_size – a single int, the size of the window
- stride – a single int, the stride of the window. Default value is kernel_size
- ceil_mode – when True, will use ceil instead of floor to compute the output shape

Shape:
- Input: (N, C, L_{in})
- Output: (N, C, L_{out}) where

L_{out} = \left\lfloor\frac{L_{in} + 2 * \text{padding} - \text{kernel_size}}{\text{stride}} + 1\right\rfloor
Examples:

>>> # power-2 pool of window of length 3, with stride 2.
>>> m = nn.LPPool1d(2, 3, stride=2)
>>> input = torch.randn(20, 16, 50)
>>> output = m(input)
LPPool2d

class torch.nn.LPPool2d(norm_type, kernel_size, stride=None, ceil_mode=False) [source]

Applies a 2D power-average pooling over an input signal composed of several input planes.

On each window, the function computed is:

f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}

- At p = \infty, one gets Max Pooling
- At p = 1, one gets Sum Pooling (which is proportional to Average Pooling)

The parameters kernel_size, stride can either be:

- a single int – in which case the same value is used for the height and width dimension
- a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension

Parameters:
- kernel_size – the size of the window
- stride – the stride of the window. Default value is kernel_size
- ceil_mode – when True, will use ceil instead of floor to compute the output shape
- Shape:
Input: (N, C, H_{in}, W_{in})
Output: (N, C, H_{out}, W_{out}) where
\begin{align}\begin{aligned}H_{out} = \left\lfloor\frac{H_{in} + 2 * \text{padding}[0] - \text{dilation}[0] * (\text{kernel_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor\\W_{out} = \left\lfloor\frac{W_{in} + 2 * \text{padding}[1] - \text{dilation}[1] * (\text{kernel_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor\end{aligned}\end{align}
Examples:
>>> # power-2 pool of square window of size=3, stride=2
>>> m = nn.LPPool2d(2, 3, stride=2)
>>> # pool of non-square window of power 1.2
>>> m = nn.LPPool2d(1.2, (3, 2), stride=(2, 1))
>>> input = torch.randn(20, 16, 50, 32)
>>> output = m(input)
AdaptiveMaxPool1d

class torch.nn.AdaptiveMaxPool1d(output_size, return_indices=False) [source]

Applies a 1D adaptive max pooling over an input signal composed of several input planes.

The output size is H, for any input size. The number of output features is equal to the number of input planes.

Parameters:
- output_size – the target output size H
- return_indices – if True, will return the indices along with the outputs. Useful to pass to nn.MaxUnpool1d. Default: False

Examples:
>>> # target output size of 5
>>> m = nn.AdaptiveMaxPool1d(5)
>>> input = torch.randn(1, 64, 8)
>>> output = m(input)
AdaptiveMaxPool2d

class torch.nn.AdaptiveMaxPool2d(output_size, return_indices=False) [source]

Applies a 2D adaptive max pooling over an input signal composed of several input planes.

The output is of size H x W, for any input size. The number of output features is equal to the number of input planes.

Parameters:
- output_size – the target output size of the image of the form H x W. Can be a tuple (H, W) or a single H for a square image H x H. H and W can be either an int, or None, which means the size will be the same as that of the input.
- return_indices – if True, will return the indices along with the outputs. Useful to pass to nn.MaxUnpool2d. Default: False

Examples:
>>> # target output size of 5x7 >>> m = nn.AdaptiveMaxPool2d((5,7)) >>> input = torch.randn(1, 64, 8, 9) >>> output = m(input) >>> # target output size of 7x7 (square) >>> m = nn.AdaptiveMaxPool2d(7) >>> input = torch.randn(1, 64, 10, 9) >>> output = m(input) >>> # target output size of 10x7 >>> m = nn.AdaptiveMaxPool2d((None, 7)) >>> input = torch.randn(1, 64, 10, 9) >>> output = m(input)
AdaptiveMaxPool3d¶
-
class
torch.nn.
AdaptiveMaxPool3d
(output_size, return_indices=False)[source]¶ Applies a 3D adaptive max pooling over an input signal composed of several input planes.
The output is of size D x H x W, for any input size. The number of output features is equal to the number of input planes.
Parameters: - output_size – the target output size of the image of the form D x H x W.
Can be a tuple (D, H, W) or a single D for a cube D x D x D.
D, H and W can be either a
int
, orNone
which means the size will be the same as that of the input. - return_indices – if
True
, will return the indices along with the outputs. Useful to pass to nn.MaxUnpool3d. Default:False
Examples
>>> # target output size of 5x7x9 >>> m = nn.AdaptiveMaxPool3d((5,7,9)) >>> input = torch.randn(1, 64, 8, 9, 10) >>> output = m(input) >>> # target output size of 7x7x7 (cube) >>> m = nn.AdaptiveMaxPool3d(7) >>> input = torch.randn(1, 64, 10, 9, 8) >>> output = m(input) >>> # target output size of 7x9x8 >>> m = nn.AdaptiveMaxPool3d((7, None, None)) >>> input = torch.randn(1, 64, 10, 9, 8) >>> output = m(input)
AdaptiveAvgPool1d¶
-
class
torch.nn.
AdaptiveAvgPool1d
(output_size)[source]¶ Applies a 1D adaptive average pooling over an input signal composed of several input planes.
The output size is H, for any input size. The number of output features is equal to the number of input planes.
Parameters: output_size – the target output size H Examples
>>> # target output size of 5 >>> m = nn.AdaptiveAvgPool1d(5) >>> input = torch.randn(1, 64, 8) >>> output = m(input)
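When the input length is an exact multiple of the target size, adaptive average pooling coincides with ordinary average pooling whose kernel and stride equal L_in / output_size. A small sketch (shapes chosen for illustration):
import torch
import torch.nn as nn

x = torch.randn(1, 16, 64)
adaptive = nn.AdaptiveAvgPool1d(8)(x)               # target length 8
manual = nn.AvgPool1d(kernel_size=8, stride=8)(x)   # 64 / 8 = 8
print(torch.allclose(adaptive, manual))             # True when the length divides evenly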
AdaptiveAvgPool2d¶
-
class
torch.nn.
AdaptiveAvgPool2d
(output_size)[source]¶ Applies a 2D adaptive average pooling over an input signal composed of several input planes.
The output is of size H x W, for any input size. The number of output features is equal to the number of input planes.
Parameters: output_size – the target output size of the image of the form H x W. Can be a tuple (H, W) or a single H for a square image H x H H and W can be either a int
, orNone
which means the size will be the same as that of the input.Examples
>>> # target output size of 5x7 >>> m = nn.AdaptiveAvgPool2d((5,7)) >>> input = torch.randn(1, 64, 8, 9) >>> output = m(input) >>> # target output size of 7x7 (square) >>> m = nn.AdaptiveAvgPool2d(7) >>> input = torch.randn(1, 64, 10, 9) >>> output = m(input) >>> # target output size of 10x7 >>> m = nn.AdaptiveAvgPool2d((None, 7)) >>> input = torch.randn(1, 64, 10, 9) >>> output = m(input)
AdaptiveAvgPool3d¶
-
class
torch.nn.
AdaptiveAvgPool3d
(output_size)[source]¶ Applies a 3D adaptive average pooling over an input signal composed of several input planes.
The output is of size D x H x W, for any input size. The number of output features is equal to the number of input planes.
Parameters: output_size – the target output size of the form D x H x W. Can be a tuple (D, H, W) or a single number D for a cube D x D x D D, H and W can be either a int
, orNone
which means the size will be the same as that of the input.Examples
>>> # target output size of 5x7x9 >>> m = nn.AdaptiveAvgPool3d((5,7,9)) >>> input = torch.randn(1, 64, 8, 9, 10) >>> output = m(input) >>> # target output size of 7x7x7 (cube) >>> m = nn.AdaptiveAvgPool3d(7) >>> input = torch.randn(1, 64, 10, 9, 8) >>> output = m(input) >>> # target output size of 7x9x8 >>> m = nn.AdaptiveAvgPool3d((7, None, None)) >>> input = torch.randn(1, 64, 10, 9, 8) >>> output = m(input)
Padding layers¶
ReflectionPad1d¶
-
class
torch.nn.
ReflectionPad1d
(padding)[source]¶ Pads the input tensor using the reflection of the input boundary.
For N-dimensional padding, use torch.nn.functional.pad().
Parameters: padding (int, tuple) – the size of the padding. If it is an int, uses the same padding on all boundaries. If a 2-tuple, uses (paddingLeft, paddingRight) - Shape:
- Input: (N, C, W_{in})
- Output: (N, C, W_{out}) where W_{out} = W_{in} + \textit{paddingLeft} + \textit{paddingRight}
Examples:
>>> m = nn.ReflectionPad1d(2) >>> input = torch.arange(8).reshape(1, 2, 4) >>> input (0 ,.,.) = 0 1 2 3 4 5 6 7 [torch.FloatTensor of size (1,2,4)] >>> m(input) (0 ,.,.) = 2 1 0 1 2 3 2 1 6 5 4 5 6 7 6 5 [torch.FloatTensor of size (1,2,8)] >>> # using different paddings >>> m = nn.ReflectionPad1d((3, 1)) >>> m(input) (0 ,.,.) = 3 2 1 0 1 2 3 2 7 6 5 4 5 6 7 6 [torch.FloatTensor of size (1,2,8)]
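The module form above should agree with the functional interface it points to; a minimal sketch, assuming F.pad with mode='reflect' (pad sizes must stay smaller than the padded dimension):
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.arange(8.).reshape(1, 2, 4)
m = nn.ReflectionPad1d((3, 1))
y = F.pad(x, (3, 1), mode='reflect')   # functional equivalent, usable for other dims too
print(torch.equal(m(x), y))            # True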
ReflectionPad2d¶
-
class
torch.nn.
ReflectionPad2d
(padding)[source]¶ Pads the input tensor using the reflection of the input boundary.
For N-dimensional padding, use torch.nn.functional.pad().
Parameters: padding (int, tuple) – the size of the padding. If it is an int, uses the same padding on all boundaries. If a 4-tuple, uses (paddingLeft, paddingRight, paddingTop, paddingBottom) - Shape:
- Input: (N, C, H_{in}, W_{in})
- Output: (N, C, H_{out}, W_{out}) where H_{out} = H_{in} + \textit{paddingTop} + \textit{paddingBottom} W_{out} = W_{in} + \textit{paddingLeft} + \textit{paddingRight}
Examples:
>>> m = nn.ReflectionPad2d(2) >>> input = torch.arange(9).reshape(1, 1, 3, 3) >>> input (0 ,0 ,.,.) = 0 1 2 3 4 5 6 7 8 [torch.FloatTensor of size (1,1,3,3)] >>> m(input) (0 ,0 ,.,.) = 8 7 6 7 8 7 6 5 4 3 4 5 4 3 2 1 0 1 2 1 0 5 4 3 4 5 4 3 8 7 6 7 8 7 6 5 4 3 4 5 4 3 2 1 0 1 2 1 0 [torch.FloatTensor of size (1,1,7,7)] >>> # using different paddings >>> m = nn.ReflectionPad2d((1, 1, 2, 0)) >>> m(input) (0 ,0 ,.,.) = 7 6 7 8 7 4 3 4 5 4 1 0 1 2 1 4 3 4 5 4 7 6 7 8 7 [torch.FloatTensor of size (1,1,5,5)]
ReplicationPad1d¶
-
class
torch.nn.
ReplicationPad1d
(padding)[source]¶ Pads the input tensor using replication of the input boundary.
For N-dimensional padding, use torch.nn.functional.pad().
Parameters: padding (int, tuple) – the size of the padding. If it is an int, uses the same padding on all boundaries. If a 2-tuple, uses (paddingLeft, paddingRight) - Shape:
- Input: (N, C, W_{in})
- Output: (N, C, W_{out}) where W_{out} = W_{in} + \textit{paddingLeft} + \textit{paddingRight}
Examples:
>>> m = nn.ReplicationPad1d(2) >>> input = torch.arange(8).reshape(1, 2, 4) >>> input (0 ,.,.) = 0 1 2 3 4 5 6 7 [torch.FloatTensor of size (1,2,4)] >>> m(input) (0 ,.,.) = 0 0 0 1 2 3 3 3 4 4 4 5 6 7 7 7 [torch.FloatTensor of size (1,2,8)] >>> # using different paddings >>> m = nn.ReplicationPad1d((3, 1)) >>> m(input) (0 ,.,.) = 0 0 0 0 1 2 3 3 4 4 4 4 5 6 7 7 [torch.FloatTensor of size (1,2,8)]
ReplicationPad2d¶
-
class
torch.nn.
ReplicationPad2d
(padding)[source]¶ Pads the input tensor using replication of the input boundary.
For N-dimensional padding, use torch.nn.functional.pad().
Parameters: padding (int, tuple) – the size of the padding. If it is an int, uses the same padding on all boundaries. If a 4-tuple, uses (paddingLeft, paddingRight, paddingTop, paddingBottom) - Shape:
- Input: (N, C, H_{in}, W_{in})
- Output: (N, C, H_{out}, W_{out}) where H_{out} = H_{in} + \textit{paddingTop} + \textit{paddingBottom} W_{out} = W_{in} + \textit{paddingLeft} + \textit{paddingRight}
Examples:
>>> m = nn.ReplicationPad2d(2) >>> input = torch.arange(9).reshape(1, 1, 3, 3) >>> input (0 ,0 ,.,.) = 0 1 2 3 4 5 6 7 8 [torch.FloatTensor of size (1,1,3,3)] >>> m(input) (0 ,0 ,.,.) = 0 0 0 1 2 2 2 0 0 0 1 2 2 2 0 0 0 1 2 2 2 3 3 3 4 5 5 5 6 6 6 7 8 8 8 6 6 6 7 8 8 8 6 6 6 7 8 8 8 [torch.FloatTensor of size (1,1,7,7)] >>> # using different paddings >>> m = nn.ReplicationPad2d((1, 1, 2, 0)) >>> m(input) (0 ,0 ,.,.) = 0 0 1 2 2 0 0 1 2 2 0 0 1 2 2 3 3 4 5 5 6 6 7 8 8 [torch.FloatTensor of size (1,1,5,5)]
ReplicationPad3d¶
-
class
torch.nn.
ReplicationPad3d
(padding)[source]¶ Pads the input tensor using replication of the input boundary.
For N-dimensional padding, use torch.nn.functional.pad().
Parameters: padding (int, tuple) – the size of the padding. If it is an int, uses the same padding on all boundaries. If a 6-tuple, uses (paddingLeft, paddingRight, paddingTop, paddingBottom, paddingFront, paddingBack) - Shape:
- Input: (N, C, D_{in}, H_{in}, W_{in})
- Output: (N, C, D_{out}, H_{out}, W_{out}) where D_{out} = D_{in} + \textit{paddingFront} + \textit{paddingBack} H_{out} = H_{in} + \textit{paddingTop} + \textit{paddingBottom} W_{out} = W_{in} + \textit{paddingLeft} + \textit{paddingRight}
Examples:
>>> m = nn.ReplicationPad3d(3) >>> input = torch.randn(16, 3, 8, 320, 480) >>> output = m(input) >>> # using different paddings >>> m = nn.ReplicationPad3d((3, 3, 6, 6, 1, 1)) >>> output = m(input)
ZeroPad2d¶
-
class
torch.nn.
ZeroPad2d
(padding)[source]¶ Pads the input tensor boundaries with zero.
For N-dimensional padding, use torch.nn.functional.pad().
Parameters: padding (int, tuple) – the size of the padding. If it is an int, uses the same padding on all boundaries. If a 4-tuple, uses (paddingLeft, paddingRight, paddingTop, paddingBottom) - Shape:
- Input: (N, C, H_{in}, W_{in})
- Output: (N, C, H_{out}, W_{out}) where H_{out} = H_{in} + \textit{paddingTop} + \textit{paddingBottom} W_{out} = W_{in} + \textit{paddingLeft} + \textit{paddingRight}
Examples:
>>> m = nn.ZeroPad2d(2) >>> input = torch.randn(1, 1, 3, 3) >>> input (0 ,0 ,.,.) = 1.4418 -1.9812 -0.3815 -0.3828 -0.6833 -0.2376 0.1433 0.0211 0.4311 [torch.FloatTensor of size (1,1,3,3)] >>> m(input) (0 ,0 ,.,.) = 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 1.4418 -1.9812 -0.3815 0.0000 0.0000 0.0000 0.0000 -0.3828 -0.6833 -0.2376 0.0000 0.0000 0.0000 0.0000 0.1433 0.0211 0.4311 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 [torch.FloatTensor of size (1,1,7,7)] >>> # using different paddings >>> m = nn.ZeroPad2d((1, 1, 2, 0)) >>> m(input) (0 ,0 ,.,.) = 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 1.4418 -1.9812 -0.3815 0.0000 0.0000 -0.3828 -0.6833 -0.2376 0.0000 0.0000 0.1433 0.0211 0.4311 0.0000 [torch.FloatTensor of size (1,1,5,5)]
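ZeroPad2d is simply constant padding with value 0, and F.pad defaults to the same behaviour; a short sketch of that equivalence:
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 1, 3, 3)
zero = nn.ZeroPad2d((1, 1, 2, 0))(x)
const = nn.ConstantPad2d((1, 1, 2, 0), 0.0)(x)  # same layer with an explicit value
func = F.pad(x, (1, 1, 2, 0))                   # mode='constant', value=0 by default
print(torch.equal(zero, const) and torch.equal(zero, func))  # True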
ConstantPad1d¶
-
class
torch.nn.
ConstantPad1d
(padding, value)[source]¶ Pads the input tensor boundaries with a constant value.
For N-dimensional padding, use torch.nn.functional.pad().
Parameters: padding (int, tuple) – the size of the padding. If it is an int, uses the same padding on both boundaries. If a 2-tuple, uses (paddingLeft, paddingRight) - Shape:
- Input: (N, C, W_{in})
- Output: (N, C, W_{out}) where W_{out} = W_{in} + \textit{paddingLeft} + \textit{paddingRight}
Examples:
>>> m = nn.ConstantPad1d(2, 3.5) >>> input = torch.randn(1, 2, 4) >>> input (0 ,.,.) = 0.1875 0.5046 -1.0074 2.0005 -0.3540 -1.8645 1.1530 0.0632 [torch.FloatTensor of size (1,2,4)] >>> m(input) (0 ,.,.) = 3.5000 3.5000 0.1875 0.5046 -1.0074 2.0005 3.5000 3.5000 3.5000 3.5000 -0.3540 -1.8645 1.1530 0.0632 3.5000 3.5000 [torch.FloatTensor of size (1,2,8)] >>> # using different paddings >>> m = nn.ConstantPad1d((3, 1), 3.5) >>> m(input) (0 ,.,.) = 3.5000 3.5000 3.5000 0.1875 0.5046 -1.0074 2.0005 3.5000 3.5000 3.5000 3.5000 -0.3540 -1.8645 1.1530 0.0632 3.5000 [torch.FloatTensor of size (1,2,8)]
ConstantPad2d¶
-
class
torch.nn.
ConstantPad2d
(padding, value)[source]¶ Pads the input tensor boundaries with a constant value.
For N-dimensional padding, use torch.nn.functional.pad().
Parameters: padding (int, tuple) – the size of the padding. If it is an int, uses the same padding on all boundaries. If a 4-tuple, uses (paddingLeft, paddingRight, paddingTop, paddingBottom) - Shape:
- Input: (N, C, H_{in}, W_{in})
- Output: (N, C, H_{out}, W_{out}) where H_{out} = H_{in} + \textit{paddingTop} + \textit{paddingBottom} W_{out} = W_{in} + \textit{paddingLeft} + \textit{paddingRight}
Examples:
>>> m = nn.ConstantPad2d(2, 3.5) >>> input = torch.randn(1, 2, 2) >>> input (0 ,.,.) = -0.2295 -0.9774 -0.3335 -1.4178 [torch.FloatTensor of size (1,2,2)] >>> m(input) (0 ,.,.) = 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 -0.2295 -0.9774 3.5000 3.5000 3.5000 3.5000 -0.3335 -1.4178 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 [torch.FloatTensor of size (1,6,6)] >>> # using different paddings >>> m = nn.ConstantPad2d((3, 0, 2, 1), 3.5) >>> m(input) (0 ,.,.) = 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 3.5000 -0.2295 -0.9774 3.5000 3.5000 3.5000 -0.3335 -1.4178 3.5000 3.5000 3.5000 3.5000 3.5000 [torch.FloatTensor of size (1,5,5)]
ConstantPad3d¶
-
class
torch.nn.
ConstantPad3d
(padding, value)[source]¶ Pads the input tensor boundaries with a constant value.
For N-dimensional padding, use torch.nn.functional.pad().
Parameters: padding (int, tuple) – the size of the padding. If it is an int, uses the same padding on all boundaries. If a 6-tuple, uses (paddingLeft, paddingRight, paddingTop, paddingBottom, paddingFront, paddingBack) - Shape:
- Input: (N, C, D_{in}, H_{in}, W_{in})
- Output: (N, C, D_{out}, H_{out}, W_{out}) where D_{out} = D_{in} + \textit{paddingFront} + \textit{paddingBack} H_{out} = H_{in} + \textit{paddingTop} + \textit{paddingBottom} W_{out} = W_{in} + \textit{paddingLeft} + \textit{paddingRight}
Examples:
>>> m = nn.ConstantPad3d(3, 3.5) >>> input = torch.randn(16, 3, 10, 20, 30) >>> output = m(input) >>> # using different paddings >>> m = nn.ConstantPad3d((3, 3, 6, 6, 0, 1), 3.5) >>> output = m(input)
Non-linear activations (weighted sum, nonlinearity)¶
ELU¶
-
class
torch.nn.
ELU
(alpha=1.0, inplace=False)[source]¶ Applies element-wise, \text{ELU}(x) = \max(0,x) + \min(0, \alpha * (\exp(x) - 1))
Parameters: - alpha – the \alpha value for the ELU formulation. Default: 1.0
- inplace – can optionally do the operation in-place. Default:
False
- Shape:
- Input: (N, *) where * means, any number of additional dimensions
- Output: (N, *), same shape as the input
Examples:
>>> m = nn.ELU() >>> input = torch.randn(2) >>> output = m(input)
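To make the formula concrete, this sketch recomputes ELU by hand from the max/min terms above and compares it against the module (alpha kept at its default of 1.0):
import torch
import torch.nn as nn

x = torch.randn(5)
alpha = 1.0
manual = torch.clamp(x, min=0) + torch.clamp(alpha * (torch.exp(x) - 1), max=0)
print(torch.allclose(nn.ELU(alpha=alpha)(x), manual))  # True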
Hardshrink¶
-
class
torch.nn.
Hardshrink
(lambd=0.5)[source]¶ Applies the hard shrinkage function element-wise. Hardshrink is defined as:
\begin{split}\text{HardShrink}(x) = \begin{cases} x, & \text{ if } x > \lambda \\ x, & \text{ if } x < -\lambda \\ 0, & \text{ otherwise } \end{cases}\end{split}Parameters: lambd – the \lambda value for the Hardshrink formulation. Default: 0.5 - Shape:
- Input: (N, *) where * means, any number of additional dimensions
- Output: (N, *), same shape as the input
Examples:
>>> m = nn.Hardshrink() >>> input = torch.randn(2) >>> output = m(input)
Hardtanh¶
-
class
torch.nn.
Hardtanh
(min_val=-1, max_val=1, inplace=False, min_value=None, max_value=None)[source]¶ Applies the HardTanh function element-wise
HardTanh is defined as:
\begin{split}\text{HardTanh}(x) = \begin{cases} 1 & \text{ if } x > 1 \\ -1 & \text{ if } x < -1 \\ x & \text{ otherwise } \\ \end{cases}\end{split}The range of the linear region [-1, 1] can be adjusted using
min_val
andmax_val
.Parameters: - min_val – minimum value of the linear region range. Default: -1
- max_val – maximum value of the linear region range. Default: 1
- inplace – can optionally do the operation in-place. Default:
False
Keyword arguments
min_value
andmax_value
have been deprecated in favor ofmin_val
andmax_val
.- Shape:
- Input: (N, *) where * means, any number of additional dimensions
- Output: (N, *), same shape as the input
Examples:
>>> m = nn.Hardtanh(-2, 2) >>> input = torch.randn(2) >>> output = m(input)
LeakyReLU¶
-
class
torch.nn.
LeakyReLU
(negative_slope=0.01, inplace=False)[source]¶ Applies element-wise, \text{LeakyReLU}(x) = \max(0, x) + \text{negative_slope} * \min(0, x) or
\begin{split}\text{LeakyReLU}(x) = \begin{cases} x, & \text{ if } x \geq 0 \\ \text{negative_slope} \times x, & \text{ otherwise } \end{cases}\end{split}Parameters: - negative_slope – Controls the angle of the negative slope. Default: 1e-2
- inplace – can optionally do the operation in-place. Default:
False
- Shape:
- Input: (N, *) where * means, any number of additional dimensions
- Output: (N, *), same shape as the input
Examples:
>>> m = nn.LeakyReLU(0.1) >>> input = torch.randn(2) >>> output = m(input)
LogSigmoid¶
-
class
torch.nn.
LogSigmoid
[source]¶ Applies element-wise \text{LogSigmoid}(x) = \log\left(\frac{ 1 }{ 1 + \exp(-x)}\right)
- Shape:
- Input: (N, *) where * means, any number of additional dimensions
- Output: (N, *), same shape as the input
Examples:
>>> m = nn.LogSigmoid() >>> input = torch.randn(2) >>> output = m(input)
PReLU¶
-
class
torch.nn.
PReLU
(num_parameters=1, init=0.25)[source]¶ Applies element-wise the function \text{PReLU}(x) = \max(0,x) + a * \min(0,x) or
\begin{split}\text{PReLU}(x) = \begin{cases} x, & \text{ if } x \geq 0 \\ ax, & \text{ otherwise } \end{cases}\end{split}Here a is a learnable parameter. When called without arguments, nn.PReLU() uses a single parameter a across all input channels. If called with nn.PReLU(nChannels), a separate a is used for each input channel.
Note
weight decay should not be used when learning a for good performance.
Parameters: - num_parameters – number of a to learn. Default: 1
- init – the initial value of a. Default: 0.25
- Shape:
- Input: (N, *) where * means, any number of additional dimensions
- Output: (N, *), same shape as the input
Examples:
>>> m = nn.PReLU() >>> input = torch.randn(2) >>> output = m(input)
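The difference between the shared and per-channel forms shows up in the shape of the learnable weight; a brief sketch:
import torch
import torch.nn as nn

shared = nn.PReLU()                        # one `a` shared across all channels
print(shared.weight.shape)                 # torch.Size([1])
per_channel = nn.PReLU(num_parameters=16)  # one `a` per input channel
print(per_channel.weight.shape)            # torch.Size([16])
x = torch.randn(4, 16, 8)                  # channel dimension is dim 1
print(per_channel(x).shape)                # torch.Size([4, 16, 8])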
ReLU¶
-
class
torch.nn.
ReLU
(inplace=False)[source]¶ Applies the rectified linear unit function element-wise \text{ReLU}(x)= \max(0, x)
Parameters: inplace – can optionally do the operation in-place. Default: False
- Shape:
- Input: (N, *) where * means, any number of additional dimensions
- Output: (N, *), same shape as the input
Examples:
>>> m = nn.ReLU() >>> input = torch.randn(2) >>> output = m(input)
ReLU6¶
-
class
torch.nn.
ReLU6
(inplace=False)[source]¶ Applies the element-wise function \text{ReLU6}(x) = \min(\max(0,x), 6)
Parameters: inplace – can optionally do the operation in-place. Default: False
- Shape:
- Input: (N, *) where * means, any number of additional dimensions
- Output: (N, *), same shape as the input
Examples:
>>> m = nn.ReLU6() >>> input = torch.randn(2) >>> output = m(input)
RReLU¶
-
class
torch.nn.
RReLU
(lower=0.125, upper=0.3333333333333333, inplace=False)[source]¶ Applies the randomized leaky rectified linear unit function element-wise, as described in the paper Empirical Evaluation of Rectified Activations in Convolutional Network.
The function is defined as:
\begin{split}\text{RReLU}(x) = \begin{cases} x & \text{if } x \geq 0 \\ ax & \text{ otherwise } \end{cases},\end{split}where a is randomly sampled from uniform distribution \mathcal{U}(\text{lower}, \text{upper}).
Parameters: - lower – lower bound of the uniform distribution. Default: \frac{1}{8}
- upper – upper bound of the uniform distribution. Default: \frac{1}{3}
- inplace – can optionally do the operation in-place. Default:
False
- Shape:
- Input: (N, *) where * means, any number of additional dimensions
- Output: (N, *), same shape as the input
Examples:
>>> m = nn.RReLU(0.1, 0.3) >>> input = torch.randn(2) >>> output = m(input)
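The slope a is only random during training; in evaluation mode a fixed slope of (lower + upper) / 2 is used instead. A small sketch of that behaviour:
import torch
import torch.nn as nn

m = nn.RReLU(0.1, 0.3)
m.eval()                                  # switch to the deterministic slope
x = torch.randn(4)
fixed = torch.where(x >= 0, x, 0.2 * x)   # (0.1 + 0.3) / 2 = 0.2
print(torch.allclose(m(x), fixed))        # True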
SELU¶
-
class
torch.nn.
SELU
(inplace=False)[source]¶ Applies element-wise, \text{SELU}(x) = \text{scale} * (\max(0,x) + \min(0, \alpha * (\exp(x) - 1))), with \alpha = 1.6732632423543772848170429916717 and \text{scale} = 1.0507009873554804934193349852946.
More details can be found in the paper Self-Normalizing Neural Networks .
Parameters: inplace (bool, optional) – can optionally do the operation in-place. Default: False
- Shape:
- Input: (N, *) where * means, any number of additional dimensions
- Output: (N, *), same shape as the input
Examples:
>>> m = nn.SELU() >>> input = torch.randn(2) >>> output = m(input)
Sigmoid¶
-
class
torch.nn.
Sigmoid
[source]¶ Applies the element-wise function \text{Sigmoid}(x) = \frac{1}{1 + \exp(-x)}
- Shape:
- Input: (N, *) where * means, any number of additional dimensions
- Output: (N, *), same shape as the input
Examples:
>>> m = nn.Sigmoid() >>> input = torch.randn(2) >>> output = m(input)
Softplus¶
-
class
torch.nn.
Softplus
(beta=1, threshold=20)[source]¶ Applies element-wise \text{Softplus}(x) = \frac{1}{\beta} * \log(1 + \exp(\beta * x))
SoftPlus is a smooth approximation to the ReLU function and can be used to constrain the output of a machine to always be positive.
For numerical stability the implementation reverts to the linear function for inputs above a certain value.
Parameters: - beta – the \beta value for the Softplus formulation. Default: 1
- threshold – values above this revert to a linear function. Default: 20
- Shape:
- Input: (N, *) where * means, any number of additional dimensions
- Output: (N, *), same shape as the input
Examples:
>>> m = nn.Softplus() >>> input = torch.randn(2) >>> output = m(input)
Softshrink¶
-
class
torch.nn.
Softshrink
(lambd=0.5)[source]¶ Applies the soft shrinkage function elementwise
SoftShrinkage function is defined as:
\begin{split}\text{SoftShrinkage}(x) = \begin{cases} x - \lambda, & \text{ if } x > \lambda \\ x + \lambda, & \text{ if } x < -\lambda \\ 0, & \text{ otherwise } \end{cases}\end{split}Parameters: lambd – the \lambda value for the Softshrink formulation. Default: 0.5 - Shape:
- Input: (N, *) where * means, any number of additional dimensions
- Output: (N, *), same shape as the input
Examples:
>>> m = nn.Softshrink() >>> input = torch.randn(2) >>> output = m(input)
Softsign¶
-
class
torch.nn.
Softsign
[source]¶ Applies element-wise, the function \text{SoftSign}(x) = \frac{x}{ 1 + |x|}
- Shape:
- Input: (N, *) where * means, any number of additional dimensions
- Output: (N, *), same shape as the input
Examples:
>>> m = nn.Softsign() >>> input = torch.randn(2) >>> output = m(input)
Tanh¶
-
class
torch.nn.
Tanh
[source]¶ Applies element-wise, \text{Tanh}(x) = \tanh(x) = \frac{e^x - e^{-x}} {e^x + e^{-x}}
- Shape:
- Input: (N, *) where * means, any number of additional dimensions
- Output: (N, *), same shape as the input
Examples:
>>> m = nn.Tanh() >>> input = torch.randn(2) >>> output = m(input)
Tanhshrink¶
-
class
torch.nn.
Tanhshrink
[source]¶ Applies element-wise, \text{Tanhshrink}(x) = x - \tanh(x)
- Shape:
- Input: (N, *) where * means, any number of additional dimensions
- Output: (N, *), same shape as the input
Examples:
>>> m = nn.Tanhshrink() >>> input = torch.randn(2) >>> output = m(input)
Threshold¶
-
class
torch.nn.
Threshold
(threshold, value, inplace=False)[source]¶ Thresholds each element of the input Tensor
Threshold is defined as:
\begin{split}y = \begin{cases} x, &\text{ if } x > \text{threshold} \\ \text{value}, &\text{ otherwise } \end{cases}\end{split}Parameters: - threshold – The value to threshold at
- value – The value to replace with
- inplace – can optionally do the operation in-place. Default:
False
- Shape:
- Input: (N, *) where * means, any number of additional dimensions
- Output: (N, *), same shape as the input
Examples:
>>> m = nn.Threshold(0.1, 20) >>> input = torch.randn(2) >>> output = m(input)
Non-linear activations (other)¶
Softmin¶
-
class
torch.nn.
Softmin
(dim=None)[source]¶ Applies the Softmin function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range (0, 1) and sum to 1.
\text{Softmin}(x_{i}) = \frac{\exp(-x_i)}{\sum_j \exp(-x_j)}
- Shape:
- Input: any shape
- Output: same as input
Parameters: dim (int) – A dimension along which Softmin will be computed (so every slice along dim will sum to 1). Returns: a Tensor of the same dimension and shape as the input, with values in the range [0, 1] Examples:
>>> m = nn.Softmin() >>> input = torch.randn(2, 3) >>> output = m(input)
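Softmin is just Softmax applied to the negated input, which makes for a quick consistency check:
import torch
import torch.nn as nn

x = torch.randn(2, 3)
print(torch.allclose(nn.Softmin(dim=1)(x), nn.Softmax(dim=1)(-x)))  # True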
Softmax¶
-
class
torch.nn.
Softmax
(dim=None)[source]¶ Applies the Softmax function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range (0, 1) and sum to 1.
Softmax is defined as \text{Softmax}(x_{i}) = \frac{\exp(x_i)}{\sum_j \exp(x_j)}
- Shape:
- Input: any shape
- Output: same as input
Returns: a Tensor of the same dimension and shape as the input with values in the range [0, 1] Parameters: dim (int) – A dimension along which Softmax will be computed (so every slice along dim will sum to 1). Note
This module doesn’t work directly with NLLLoss, which expects the Log to be computed between the Softmax and itself. Use LogSoftmax instead (it’s faster and has better numerical properties).
Examples:
>>> m = nn.Softmax() >>> input = torch.randn(2, 3) >>> output = m(input)
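Following the note above, a minimal sketch of the recommended pattern, pairing LogSoftmax with NLLLoss rather than taking the log of a Softmax (the class count and targets are illustrative):
import torch
import torch.nn as nn

logits = torch.randn(3, 5)              # batch of 3, 5 classes
target = torch.tensor([1, 0, 4])
log_probs = nn.LogSoftmax(dim=1)(logits)
loss = nn.NLLLoss()(log_probs, target)
print(loss.item())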
Softmax2d¶
-
class
torch.nn.
Softmax2d
[source]¶ Applies SoftMax over features to each spatial location.
When given an image of
Channels x Height x Width
, it will apply Softmax to each location (Channels, h_i, w_j)- Shape:
- Input: (N, C, H, W)
- Output: (N, C, H, W) (same shape as input)
Returns: a Tensor of the same dimension and shape as the input with values in the range [0, 1] Examples:
>>> m = nn.Softmax2d() >>> # you softmax over the 2nd dimension >>> input = torch.randn(2, 3, 12, 13) >>> output = m(input)
LogSoftmax¶
-
class
torch.nn.
LogSoftmax
(dim=None)[source]¶ Applies the Log(Softmax(x)) function to an n-dimensional input Tensor. The LogSoftmax formulation can be simplified as
\text{LogSoftmax}(x_{i}) = \log\left(\frac{\exp(x_i) }{ \sum_j \exp(x_j)} \right)
- Shape:
- Input: any shape
- Output: same as input
Parameters: dim (int) – A dimension along which LogSoftmax will be computed. Returns: a Tensor of the same dimension and shape as the input with values in the range [-inf, 0) Examples:
>>> m = nn.LogSoftmax() >>> input = torch.randn(2, 3) >>> output = m(input)
Normalization layers¶
BatchNorm1d¶
-
class
torch.nn.
BatchNorm1d
(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)[source]¶ Applies Batch Normalization over a 2D or 3D input (a mini-batch of 1D inputs with optional additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift .
y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \betaThe mean and standard-deviation are calculated per-dimension over the mini-batches and \gamma and \beta are learnable parameter vectors of size C (where C is the input size).
By default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default
momentum
of 0.1.If
track_running_stats
is set toFalse
, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well.Note
This
momentum
argument is different from the one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t, where \hat{x} is the estimated statistic and x_t is the new observed value.Because the Batch Normalization is done over the C dimension, computing statistics on (N, L) slices, it’s common terminology to call this Temporal Batch Normalization.
Parameters: - num_features – C from an expected input of size (N, C, L) or L from input of size (N, L)
- eps – a value added to the denominator for numerical stability. Default: 1e-5
- momentum – the value used for the running_mean and running_var computation. Default: 0.1
- affine – a boolean value that when set to
True
, this module has learnable affine parameters. Default:True
- track_running_stats – a boolean value that when set to
True
, this module tracks the running mean and variance, and when set toFalse
, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default:True
- Shape:
- Input: (N, C) or (N, C, L)
- Output: (N, C) or (N, C, L) (same shape as input)
Examples:
>>> # With Learnable Parameters >>> m = nn.BatchNorm1d(100) >>> # Without Learnable Parameters >>> m = nn.BatchNorm1d(100, affine=False) >>> input = torch.randn(20, 100) >>> output = m(input)
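To illustrate the update rule from the note above, this sketch checks the running mean after a single training-mode forward pass (running_mean starts at zero, so it becomes momentum times the batch mean):
import torch
import torch.nn as nn

m = nn.BatchNorm1d(3, momentum=0.1)
x = torch.randn(20, 3)
m.train()
_ = m(x)
expected = 0.9 * torch.zeros(3) + 0.1 * x.mean(dim=0)
print(torch.allclose(m.running_mean, expected, atol=1e-6))  # True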
BatchNorm2d¶
-
class
torch.nn.
BatchNorm2d
(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)[source]¶ Applies Batch Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift .
y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \betaThe mean and standard-deviation are calculated per-dimension over the mini-batches and \gamma and \beta are learnable parameter vectors of size C (where C is the input size).
By default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default
momentum
of 0.1.If
track_running_stats
is set toFalse
, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well.Note
This
momentum
argument is different from the one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t, where \hat{x} is the estimated statistic and x_t is the new observed value.Because the Batch Normalization is done over the C dimension, computing statistics on (N, H, W) slices, it’s common terminology to call this Spatial Batch Normalization.
Parameters: - num_features – C from an expected input of size (N, C, H, W)
- eps – a value added to the denominator for numerical stability. Default: 1e-5
- momentum – the value used for the running_mean and running_var computation. Default: 0.1
- affine – a boolean value that when set to
True
, this module has learnable affine parameters. Default:True
- track_running_stats – a boolean value that when set to
True
, this module tracks the running mean and variance, and when set toFalse
, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default:True
- Shape:
- Input: (N, C, H, W)
- Output: (N, C, H, W) (same shape as input)
Examples:
>>> # With Learnable Parameters >>> m = nn.BatchNorm2d(100) >>> # Without Learnable Parameters >>> m = nn.BatchNorm2d(100, affine=False) >>> input = torch.randn(20, 100, 35, 45) >>> output = m(input)
BatchNorm3d¶
-
class
torch.nn.
BatchNorm3d
(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)[source]¶ Applies Batch Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift .
y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \betaThe mean and standard-deviation are calculated per-dimension over the mini-batches and \gamma and \beta are learnable parameter vectors of size C (where C is the input size).
By default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default
momentum
of 0.1.If
track_running_stats
is set toFalse
, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well.Note
This
momentum
argument is different from the one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t, where \hat{x} is the estimated statistic and x_t is the new observed value.Because the Batch Normalization is done over the C dimension, computing statistics on (N, D, H, W) slices, it’s common terminology to call this Volumetric Batch Normalization or Spatio-temporal Batch Normalization.
Parameters: - num_features – C from an expected input of size (N, C, D, H, W)
- eps – a value added to the denominator for numerical stability. Default: 1e-5
- momentum – the value used for the running_mean and running_var computation. Default: 0.1
- affine – a boolean value that when set to
True
, this module has learnable affine parameters. Default:True
- track_running_stats – a boolean value that when set to
True
, this module tracks the running mean and variance, and when set toFalse
, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default:True
- Shape:
- Input: (N, C, D, H, W)
- Output: (N, C, D, H, W) (same shape as input)
Examples:
>>> # With Learnable Parameters >>> m = nn.BatchNorm3d(100) >>> # Without Learnable Parameters >>> m = nn.BatchNorm3d(100, affine=False) >>> input = torch.randn(20, 100, 35, 45, 10) >>> output = m(input)
InstanceNorm1d¶
-
class
torch.nn.
InstanceNorm1d
(num_features, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)[source]¶ Applies Instance Normalization over a 2D or 3D input (a mini-batch of 1D inputs with optional additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization .
y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \betaThe mean and standard-deviation are calculated per-dimension separately for each object in a mini-batch. \gamma and \beta are learnable parameter vectors of size C (where C is the input size) if
affine
isTrue
.By default, this layer uses instance statistics computed from input data in both training and evaluation modes.
If
track_running_stats
is set toTrue
, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a defaultmomentum
of 0.1.Note
This
momentum
argument is different from the one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t, where \hat{x} is the estimated statistic and x_t is the new observed value.Parameters: - num_features – C from an expected input of size (N, C, L) or L from input of size (N, L)
- eps – a value added to the denominator for numerical stability. Default: 1e-5
- momentum – the value used for the running_mean and running_var computation. Default: 0.1
- affine – a boolean value that when set to
True
, this module has learnable affine parameters. Default: False
- track_running_stats – a boolean value that when set to
True
, this module tracks the running mean and variance, and when set toFalse
, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default:False
- Shape:
- Input: (N, C, L)
- Output: (N, C, L) (same shape as input)
Examples:
>>> # Without Learnable Parameters >>> m = nn.InstanceNorm1d(100) >>> # With Learnable Parameters >>> m = nn.InstanceNorm1d(100, affine=True) >>> input = torch.randn(20, 100, 40) >>> output = m(input)
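Unlike batch normalization, instance normalization computes statistics per sample and per channel, so every (sample, channel) slice ends up with roughly zero mean; a short sketch:
import torch
import torch.nn as nn

m = nn.InstanceNorm1d(4)
x = torch.randn(2, 4, 100) * 5 + 3    # shifted, scaled input
y = m(x)
print(y.mean(dim=2).abs().max().item() < 1e-5)  # each (sample, channel) slice ~ zero mean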
InstanceNorm2d¶
-
class
torch.nn.
InstanceNorm2d
(num_features, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)[source]¶ Applies Instance Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization .
y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \betaThe mean and standard-deviation are calculated per-dimension separately for each object in a mini-batch. \gamma and \beta are learnable parameter vectors of size C (where C is the input size) if
affine
isTrue
.By default, this layer uses instance statistics computed from input data in both training and evaluation modes.
If
track_running_stats
is set toTrue
, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a defaultmomentum
of 0.1.Note
This
momentum
argument is different from the one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t, where \hat{x} is the estimated statistic and x_t is the new observed value.Parameters: - num_features – C from an expected input of size (N, C, H, W)
- eps – a value added to the denominator for numerical stability. Default: 1e-5
- momentum – the value used for the running_mean and running_var computation. Default: 0.1
- affine – a boolean value that when set to
True
, this module has learnable affine parameters. Default: False
- track_running_stats – a boolean value that when set to
True
, this module tracks the running mean and variance, and when set toFalse
, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default:False
- Shape:
- Input: (N, C, H, W)
- Output: (N, C, H, W) (same shape as input)
Examples:
>>> # Without Learnable Parameters >>> m = nn.InstanceNorm2d(100) >>> # With Learnable Parameters >>> m = nn.InstanceNorm2d(100, affine=True) >>> input = torch.randn(20, 100, 35, 45) >>> output = m(input)
InstanceNorm3d¶
-
class
torch.nn.
InstanceNorm3d
(num_features, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)[source]¶ Applies Instance Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization .
y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \betaThe mean and standard-deviation are calculated per-dimension separately for each object in a mini-batch. \gamma and \beta are learnable parameter vectors of size C (where C is the input size) if
affine
isTrue
.By default, this layer uses instance statistics computed from input data in both training and evaluation modes.
If
track_running_stats
is set toTrue
, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a defaultmomentum
of 0.1.Note
This
momentum
argument is different from the one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t, where \hat{x} is the estimated statistic and x_t is the new observed value.Parameters: - num_features – C from an expected input of size (N, C, D, H, W)
- eps – a value added to the denominator for numerical stability. Default: 1e-5
- momentum – the value used for the running_mean and running_var computation. Default: 0.1
- affine – a boolean value that when set to
True
, this module has learnable affine parameters. Default: False
- track_running_stats – a boolean value that when set to
True
, this module tracks the running mean and variance, and when set toFalse
, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default:False
- Shape:
- Input: (N, C, D, H, W)
- Output: (N, C, D, H, W) (same shape as input)
Examples:
>>> # Without Learnable Parameters >>> m = nn.InstanceNorm3d(100) >>> # With Learnable Parameters >>> m = nn.InstanceNorm3d(100, affine=True) >>> input = torch.randn(20, 100, 35, 45, 10) >>> output = m(input)
LayerNorm¶
-
class
torch.nn.
LayerNorm
(normalized_shape, eps=1e-05, elementwise_affine=True)[source]¶ Applies Layer Normalization over a mini-batch of inputs as described in the paper Layer Normalization .
y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \betaThe mean and standard-deviation are calculated separately over the last certain number of dimensions, with shape specified by
normalized_shape
. \gamma and \beta are learnable affine transform parameters ofnormalized_shape
ifelementwise_affine
isTrue
.Note
Unlike Batch Normalization and Instance Normalization, which applies scalar scale and bias for each entire channel/plane with the
affine
option, Layer Normalization applies per-element scale and bias withelementwise_affine
.This layer uses statistics computed from input data in both training and evaluation modes.
Parameters: - normalized_shape (int or list or torch.Size) –
input shape from an expected input of size
[* \times \text{normalized_shape}[0] \times \text{normalized_shape}[1] \times \ldots \times \text{normalized_shape}[-1]]If a single integer is used, it is treated as a singleton list, and this module will normalize over the last dimension with that specific size.
- eps – a value added to the denominator for numerical stability. Default: 1e-5
- elementwise_affine – a boolean value that when set to
True
, this module has learnable per-element affine parameters. Default:True
- Shape:
- Input: (N, *)
- Output: (N, *) (same shape as input)
Examples:
>>> input = torch.randn(20, 5, 10, 10) >>> # With Learnable Parameters >>> m = nn.LayerNorm(input.size()[1:]) >>> # Without Learnable Parameters >>> m = nn.LayerNorm(input.size()[1:], elementwise_affine=False) >>> # Normalize over last two dimensions >>> m = nn.LayerNorm([10, 10]) >>> # Normalize over last dimension of size 10 >>> m = nn.LayerNorm(10) >>> # Activating the module >>> output = m(input)
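The statistics are taken over the trailing dimensions given by normalized_shape; a hedged sketch recomputing the normalization by hand for the [10, 10] case (eps assumed at its 1e-5 default):
import torch
import torch.nn as nn

x = torch.randn(20, 5, 10, 10)
m = nn.LayerNorm([10, 10], elementwise_affine=False)
mean = x.mean(dim=(-2, -1), keepdim=True)
var = x.var(dim=(-2, -1), unbiased=False, keepdim=True)
manual = (x - mean) / torch.sqrt(var + 1e-5)
print(torch.allclose(m(x), manual, atol=1e-5))  # True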
LocalResponseNorm¶
-
class
torch.nn.
LocalResponseNorm
(size, alpha=0.0001, beta=0.75, k=1)[source]¶ Applies local response normalization over an input signal composed of several input planes, where channels occupy the second dimension. Normalization is applied across channels.
b_{c} = a_{c}\left(k + \frac{\alpha}{n} \sum_{c'=\max(0, c-n/2)}^{\min(N-1,c+n/2)}a_{c'}^2\right)^{-\beta}Parameters: - size – amount of neighbouring channels used for normalization
- alpha – multiplicative factor. Default: 0.0001
- beta – exponent. Default: 0.75
- k – additive factor. Default: 1
- Shape:
- Input: (N, C, ...)
- Output: (N, C, ...) (same shape as input)
Examples:
>>> lrn = nn.LocalResponseNorm(2) >>> signal_2d = torch.randn(32, 5, 24, 24) >>> signal_4d = torch.randn(16, 5, 7, 7, 7, 7) >>> output_2d = lrn(signal_2d) >>> output_4d = lrn(signal_4d)
Recurrent layers¶
RNN¶
-
class
torch.nn.
RNN
(*args, **kwargs)[source]¶ Applies a multi-layer Elman RNN with tanh or ReLU non-linearity to an input sequence.
For each element in the input sequence, each layer computes the following function:
h_t = \tanh(w_{ih} x_t + b_{ih} + w_{hh} h_{(t-1)} + b_{hh})where h_t is the hidden state at time t, x_t is the input at time t, and h_{(t-1)} is the hidden state of the previous layer at time t-1 or the initial hidden state at time 0. If
nonlinearity='relu', then ReLU
is used instead of tanh.Parameters: - input_size – The number of expected features in the input x
- hidden_size – The number of features in the hidden state h
- num_layers – Number of recurrent layers. E.g., setting
num_layers=2
would mean stacking two RNNs together to form a stacked RNN, with the second RNN taking in outputs of the first RNN and computing the final results. Default: 1 - nonlinearity – The non-linearity to use. Can be either ‘tanh’ or ‘relu’. Default: ‘tanh’
- bias – If
False
, then the layer does not use bias weights b_ih and b_hh. Default:True
- batch_first – If
True
, then the input and output tensors are provided as (batch, seq, feature) - dropout – If non-zero, introduces a Dropout layer on the outputs of each
RNN layer except the last layer, with dropout probability equal to
dropout
. Default: 0 - bidirectional – If
True
, becomes a bidirectional RNN. Default:False
- Inputs: input, h_0
- input of shape (seq_len, batch, input_size): tensor containing the features
of the input sequence. The input can also be a packed variable length
sequence. See
torch.nn.utils.rnn.pack_padded_sequence()
ortorch.nn.utils.rnn.pack_sequence()
for details. - h_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided.
- input of shape (seq_len, batch, input_size): tensor containing the features
of the input sequence. The input can also be a packed variable length
sequence. See
- Outputs: output, h_n
- output of shape (seq_len, batch, hidden_size * num_directions): tensor
containing the output features (h_k) from the last layer of the RNN,
for each k. If a
torch.nn.utils.rnn.PackedSequence
has been given as the input, the output will also be a packed sequence. - h_n (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for k = seq_len.
- output of shape (seq_len, batch, hidden_size * num_directions): tensor
containing the output features (h_k) from the last layer of the RNN,
for each k. If a
Variables: - weight_ih_l[k] – the learnable input-hidden weights of the k-th layer, of shape (hidden_size x input_size) for k = 0. Otherwise, the shape is (hidden_size x hidden_size)
- weight_hh_l[k] – the learnable hidden-hidden weights of the k-th layer, of shape (hidden_size x hidden_size)
- bias_ih_l[k] – the learnable input-hidden bias of the k-th layer, of shape (hidden_size)
- bias_hh_l[k] – the learnable hidden-hidden bias of the k-th layer, of shape (hidden_size)
Examples:
>>> rnn = nn.RNN(10, 20, 2) >>> input = torch.randn(5, 3, 10) >>> h0 = torch.randn(2, 3, 20) >>> output, hn = rnn(input, h0)
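The shape conventions above extend to stacked and bidirectional networks: outputs concatenate the two directions while hidden states stack layers times directions. A brief sketch:
import torch
import torch.nn as nn

rnn = nn.RNN(10, 20, num_layers=2, bidirectional=True)
input = torch.randn(5, 3, 10)   # (seq_len, batch, input_size)
output, hn = rnn(input)
print(output.shape)             # (5, 3, 40): hidden_size * num_directions
print(hn.shape)                 # (4, 3, 20): num_layers * num_directions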
LSTM¶
-
class
torch.nn.
LSTM
(*args, **kwargs)[source]¶ Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence.
For each element in the input sequence, each layer computes the following function:
\begin{split}\begin{array}{ll} i_t = \sigma(W_{ii} x_t + b_{ii} + W_{hi} h_{(t-1)} + b_{hi}) \\ f_t = \sigma(W_{if} x_t + b_{if} + W_{hf} h_{(t-1)} + b_{hf}) \\ g_t = \tanh(W_{ig} x_t + b_{ig} + W_{hg} h_{(t-1)} + b_{hg}) \\ o_t = \sigma(W_{io} x_t + b_{io} + W_{ho} h_{(t-1)} + b_{ho}) \\ c_t = f_t c_{(t-1)} + i_t g_t \\ h_t = o_t \tanh(c_t) \end{array}\end{split}where h_t is the hidden state at time t, c_t is the cell state at time t, x_t is the input at time t, h_{(t-1)} is the hidden state of the previous layer at time t-1 or the initial hidden state at time 0, and i_t, f_t, g_t, o_t are the input, forget, cell, and output gates, respectively. \sigma is the sigmoid function.
Parameters: - input_size – The number of expected features in the input x
- hidden_size – The number of features in the hidden state h
- num_layers – Number of recurrent layers. E.g., setting
num_layers=2
would mean stacking two LSTMs together to form a stacked LSTM, with the second LSTM taking in outputs of the first LSTM and computing the final results. Default: 1 - bias – If
False
, then the layer does not use bias weights b_ih and b_hh. Default:True
- batch_first – If
True
, then the input and output tensors are provided as (batch, seq, feature) - dropout – If non-zero, introduces a Dropout layer on the outputs of each
LSTM layer except the last layer, with dropout probability equal to
dropout
. Default: 0 - bidirectional – If
True
, becomes a bidirectional LSTM. Default:False
- Inputs: input, (h_0, c_0)
input of shape (seq_len, batch, input_size): tensor containing the features of the input sequence. The input can also be a packed variable length sequence. See
torch.nn.utils.rnn.pack_padded_sequence()
ortorch.nn.utils.rnn.pack_sequence()
for details.h_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch.
c_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial cell state for each element in the batch.
If (h_0, c_0) is not provided, both h_0 and c_0 default to zero.
- Outputs: output, (h_n, c_n)
- output of shape (seq_len, batch, hidden_size * num_directions): tensor
containing the output features (h_t) from the last layer of the LSTM,
for each t. If a
torch.nn.utils.rnn.PackedSequence
has been given as the input, the output will also be a packed sequence. - h_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t = seq_len
- c_n (num_layers * num_directions, batch, hidden_size): tensor containing the cell state for t = seq_len
- output of shape (seq_len, batch, hidden_size * num_directions): tensor
containing the output features (h_t) from the last layer of the LSTM,
for each t. If a
Variables: - weight_ih_l[k] – the learnable input-hidden weights of the \text{k}^{th} layer (W_ii|W_if|W_ig|W_io), of shape (4*hidden_size x input_size)
- weight_hh_l[k] – the learnable hidden-hidden weights of the \text{k}^{th} layer (W_hi|W_hf|W_hg|W_ho), of shape (4*hidden_size x hidden_size)
- bias_ih_l[k] – the learnable input-hidden bias of the \text{k}^{th} layer (b_ii|b_if|b_ig|b_io), of shape (4*hidden_size)
- bias_hh_l[k] – the learnable hidden-hidden bias of the \text{k}^{th} layer (b_hi|b_hf|b_hg|b_ho), of shape (4*hidden_size)
Examples:
>>> rnn = nn.LSTM(10, 20, 2) >>> input = torch.randn(5, 3, 10) >>> h0 = torch.randn(2, 3, 20) >>> c0 = torch.randn(2, 3, 20) >>> output, hn = rnn(input, (h0, c0))
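For the packed variable-length inputs the Inputs section mentions, a minimal sketch (the sequence lengths are illustrative and sorted in decreasing order, as the packing utility expects):
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

lstm = nn.LSTM(10, 20)
padded = torch.randn(5, 3, 10)  # (max_seq_len, batch, input_size)
lengths = [5, 4, 2]
packed = pack_padded_sequence(padded, lengths)
packed_out, (hn, cn) = lstm(packed)
output, out_lengths = pad_packed_sequence(packed_out)
print(output.shape)             # (5, 3, 20)
print(out_lengths)              # lengths of each sequence: 5, 4, 2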
GRU¶
-
class
torch.nn.
GRU
(*args, **kwargs)[source]¶ Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence.
For each element in the input sequence, each layer computes the following function:
\begin{split}\begin{array}{ll} r_t = \sigma(W_{ir} x_t + b_{ir} + W_{hr} h_{(t-1)} + b_{hr}) \\ z_t = \sigma(W_{iz} x_t + b_{iz} + W_{hz} h_{(t-1)} + b_{hz}) \\ n_t = \tanh(W_{in} x_t + b_{in} + r_t (W_{hn} h_{(t-1)}+ b_{hn})) \\ h_t = (1 - z_t) n_t + z_t h_{(t-1)} \\ \end{array}\end{split}where h_t is the hidden state at time t, x_t is the input at time t, h_{(t-1)} is the hidden state of the previous layer at time t-1 or the initial hidden state at time 0, and r_t, z_t, n_t are the reset, update, and new gates, respectively. \sigma is the sigmoid function.
Parameters: - input_size – The number of expected features in the input x
- hidden_size – The number of features in the hidden state h
- num_layers – Number of recurrent layers. E.g., setting
num_layers=2
would mean stacking two GRUs together to form a stacked GRU, with the second GRU taking in outputs of the first GRU and computing the final results. Default: 1 - bias – If
False
, then the layer does not use bias weights b_ih and b_hh. Default:True
- batch_first – If
True
, then the input and output tensors are provided as (batch, seq, feature) - dropout – If non-zero, introduces a Dropout layer on the outputs of each
GRU layer except the last layer, with dropout probability equal to
dropout
. Default: 0 - bidirectional – If
True
, becomes a bidirectional GRU. Default:False
- Inputs: input, h_0
- input of shape (seq_len, batch, input_size): tensor containing the features
of the input sequence. The input can also be a packed variable length
sequence. See
torch.nn.utils.rnn.pack_padded_sequence()
for details. - h_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided.
- input of shape (seq_len, batch, input_size): tensor containing the features
of the input sequence. The input can also be a packed variable length
sequence. See
- Outputs: output, h_n
- output of shape (seq_len, batch, hidden_size * num_directions): tensor
containing the output features h_t from the last layer of the GRU,
for each t. If a
torch.nn.utils.rnn.PackedSequence
has been given as the input, the output will also be a packed sequence. - h_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t = seq_len
- output of shape (seq_len, batch, hidden_size * num_directions): tensor
containing the output features h_t from the last layer of the GRU,
for each t. If a
Variables: - weight_ih_l[k] – the learnable input-hidden weights of the \text{k}^{th} layer (W_ir|W_iz|W_in), of shape (3*hidden_size x input_size)
- weight_hh_l[k] – the learnable hidden-hidden weights of the \text{k}^{th} layer (W_hr|W_hz|W_hn), of shape (3*hidden_size x hidden_size)
- bias_ih_l[k] – the learnable input-hidden bias of the \text{k}^{th} layer (b_ir|b_iz|b_in), of shape (3*hidden_size)
- bias_hh_l[k] – the learnable hidden-hidden bias of the \text{k}^{th} layer (b_hr|b_hz|b_hn), of shape (3*hidden_size)
Examples:
>>> rnn = nn.GRU(10, 20, 2) >>> input = torch.randn(5, 3, 10) >>> h0 = torch.randn(2, 3, 20) >>> output, hn = rnn(input, h0)
RNNCell¶
-
class
torch.nn.
RNNCell
(input_size, hidden_size, bias=True, nonlinearity='tanh')[source]¶ An Elman RNN cell with tanh or ReLU non-linearity.
h' = \tanh(w_{ih} x + b_{ih} + w_{hh} h + b_{hh})If nonlinearity='relu', then ReLU is used in place of tanh.
Parameters: - input_size – The number of expected features in the input x
- hidden_size – The number of features in the hidden state h
- bias – If
False
, then the layer does not use bias weights b_ih and b_hh. Default:True
- nonlinearity – The non-linearity to use. Can be either ‘tanh’ or ‘relu’. Default: ‘tanh’
- Inputs: input, hidden
- input of shape (batch, input_size): tensor containing input features
- hidden of shape (batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided.
- Outputs: h’
- h’ of shape (batch, hidden_size): tensor containing the next hidden state for each element in the batch
Variables: - weight_ih – the learnable input-hidden weights, of shape (input_size x hidden_size)
- weight_hh – the learnable hidden-hidden weights, of shape (hidden_size x hidden_size)
- bias_ih – the learnable input-hidden bias, of shape (hidden_size)
- bias_hh – the learnable hidden-hidden bias, of shape (hidden_size)
Examples:
>>> rnn = nn.RNNCell(10, 20) >>> input = torch.randn(6, 3, 10) >>> hx = torch.randn(3, 20) >>> output = [] >>> for i in range(6): hx = rnn(input[i], hx) output.append(hx)
LSTMCell¶
-
class
torch.nn.
LSTMCell
(input_size, hidden_size, bias=True)[source]¶ A long short-term memory (LSTM) cell.
\begin{split}\begin{array}{ll} i = \sigma(W_{ii} x + b_{ii} + W_{hi} h + b_{hi}) \\ f = \sigma(W_{if} x + b_{if} + W_{hf} h + b_{hf}) \\ g = \tanh(W_{ig} x + b_{ig} + W_{hg} h + b_{hg}) \\ o = \sigma(W_{io} x + b_{io} + W_{ho} h + b_{ho}) \\ c' = f * c + i * g \\ h' = o \tanh(c') \\ \end{array}\end{split}where \sigma is the sigmoid function.
Parameters: - input_size – The number of expected features in the input x
- hidden_size – The number of features in the hidden state h
- bias – If False, then the layer does not use bias weights b_ih and
b_hh. Default:
True
- Inputs: input, (h_0, c_0)
input of shape (batch, input_size): tensor containing input features
h_0 of shape (batch, hidden_size): tensor containing the initial hidden state for each element in the batch.
c_0 of shape (batch, hidden_size): tensor containing the initial cell state for each element in the batch.
If (h_0, c_0) is not provided, both h_0 and c_0 default to zero.
- Outputs: h_1, c_1
- h_1 of shape (batch, hidden_size): tensor containing the next hidden state for each element in the batch
- c_1 of shape (batch, hidden_size): tensor containing the next cell state for each element in the batch
Variables: - weight_ih – the learnable input-hidden weights, of shape (4*hidden_size x input_size)
- weight_hh – the learnable hidden-hidden weights, of shape (4*hidden_size x hidden_size)
- bias_ih – the learnable input-hidden bias, of shape (4*hidden_size)
- bias_hh – the learnable hidden-hidden bias, of shape (4*hidden_size)
Examples:
>>> rnn = nn.LSTMCell(10, 20) >>> input = torch.randn(6, 3, 10) >>> hx = torch.randn(3, 20) >>> cx = torch.randn(3, 20) >>> output = [] >>> for i in range(6): hx, cx = rnn(input[i], (hx, cx)) output.append(hx)
GRUCell¶
-
class
torch.nn.
GRUCell
(input_size, hidden_size, bias=True)[source]¶ A gated recurrent unit (GRU) cell
\begin{split}\begin{array}{ll} r = \sigma(W_{ir} x + b_{ir} + W_{hr} h + b_{hr}) \\ z = \sigma(W_{iz} x + b_{iz} + W_{hz} h + b_{hz}) \\ n = \tanh(W_{in} x + b_{in} + r * (W_{hn} h + b_{hn})) \\ h' = (1 - z) * n + z * h \end{array}\end{split}where \sigma is the sigmoid function.
Parameters: - input_size – The number of expected features in the input x
- hidden_size – The number of features in the hidden state h
- bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True
- Inputs: input, hidden
- input of shape (batch, input_size): tensor containing input features
- hidden of shape (batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided.
- Outputs: h’
- h’ of shape (batch, hidden_size): tensor containing the next hidden state for each element in the batch
Variables: - weight_ih – the learnable input-hidden weights, of shape (3*hidden_size x input_size)
- weight_hh – the learnable hidden-hidden weights, of shape (3*hidden_size x hidden_size)
- bias_ih – the learnable input-hidden bias, of shape (3*hidden_size)
- bias_hh – the learnable hidden-hidden bias, of shape (3*hidden_size)
Examples:
>>> rnn = nn.GRUCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
...     hx = rnn(input[i], hx)
...     output.append(hx)
Linear layers¶
Linear¶
class torch.nn.Linear(in_features, out_features, bias=True)[source]¶
Applies a linear transformation to the incoming data: y = xA^T + b
Parameters: - in_features – size of each input sample
- out_features – size of each output sample
- bias – If set to False, the layer will not learn an additive bias. Default: True
- Shape:
- Input: (N, *, in\_features) where * means any number of additional dimensions
- Output: (N, *, out\_features) where all but the last dimension are the same shape as the input.
Variables: - weight – the learnable weights of the module of shape (out_features x in_features)
- bias – the learnable bias of the module of shape (out_features)
Examples:
>>> m = nn.Linear(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
Bilinear¶
class torch.nn.Bilinear(in1_features, in2_features, out_features, bias=True)[source]¶
Applies a bilinear transformation to the incoming data: y = x_1 A x_2 + b
Parameters: - in1_features – size of each first input sample
- in2_features – size of each second input sample
- out_features – size of each output sample
- bias – If set to False, the layer will not learn an additive bias. Default: True
- Shape:
- Input: (N, *, \text{in1_features}), (N, *, \text{in2_features}) where * means any number of additional dimensions. All but the last dimension of the inputs should be the same.
- Output: (N, *, \text{out_features}) where all but the last dimension are the same shape as the input.
Variables: - weight – the learnable weights of the module of shape (out_features x in1_features x in2_features)
- bias – the learnable bias of the module of shape (out_features)
Examples:
>>> m = nn.Bilinear(20, 30, 40)
>>> input1 = torch.randn(128, 20)
>>> input2 = torch.randn(128, 30)
>>> output = m(input1, input2)
>>> print(output.size())
Dropout layers¶
Dropout¶
class torch.nn.Dropout(p=0.5, inplace=False)[source]¶
During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. The elements to zero are randomized on every forward call.
This has proven to be an effective technique for regularization and preventing the co-adaptation of neurons as described in the paper Improving neural networks by preventing co-adaptation of feature detectors.
Furthermore, the outputs are scaled by a factor of \frac{1}{1-p} during training. This means that during evaluation the module simply computes an identity function.
Parameters: - p – probability of an element to be zeroed. Default: 0.5
- inplace – If set to True, will do this operation in-place. Default: False
- Shape:
- Input: Any. Input can be of any shape
- Output: Same. Output is of the same shape as input
Examples:
>>> m = nn.Dropout(p=0.2)
>>> input = torch.randn(20, 16)
>>> output = m(input)
Dropout2d¶
class torch.nn.Dropout2d(p=0.5, inplace=False)[source]¶
Randomly zeroes whole channels of the input tensor. The channels to zero out are randomized on every forward call.
Usually the input comes from nn.Conv2d modules.
As described in the paper Efficient Object Localization Using Convolutional Networks, if adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then i.i.d. dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease.
In this case, nn.Dropout2d() will help promote independence between feature maps and should be used instead.
Parameters: - p (float, optional) – probability of an element to be zeroed. Default: 0.5
- inplace (bool, optional) – If set to True, will do this operation in-place
- Shape:
- Input: (N, C, H, W)
- Output: (N, C, H, W) (same shape as input)
Examples:
>>> m = nn.Dropout2d(p=0.2)
>>> input = torch.randn(20, 16, 32, 32)
>>> output = m(input)
Dropout3d¶
class torch.nn.Dropout3d(p=0.5, inplace=False)[source]¶
Randomly zeroes whole channels of the input tensor. The channels to zero are randomized on every forward call.
Usually the input comes from nn.Conv3d modules.
As described in the paper Efficient Object Localization Using Convolutional Networks, if adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then i.i.d. dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease.
In this case, nn.Dropout3d() will help promote independence between feature maps and should be used instead.
Parameters: - p (float, optional) – probability of an element to be zeroed. Default: 0.5
- inplace (bool, optional) – If set to True, will do this operation in-place
- Shape:
- Input: (N, C, D, H, W)
- Output: (N, C, D, H, W) (same shape as input)
Examples:
>>> m = nn.Dropout3d(p=0.2)
>>> input = torch.randn(20, 16, 4, 32, 32)
>>> output = m(input)
AlphaDropout¶
class torch.nn.AlphaDropout(p=0.5)[source]¶
Applies Alpha Dropout over the input.
Alpha Dropout is a type of Dropout that maintains the self-normalizing property. For an input with zero mean and unit standard deviation, the output of Alpha Dropout maintains the original mean and standard deviation of the input. Alpha Dropout goes hand-in-hand with the SELU activation function, which ensures that the outputs have zero mean and unit standard deviation.
During training, it randomly masks some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. The elements to be masked are randomized on every forward call, and scaled and shifted to maintain zero mean and unit standard deviation.
During evaluation the module simply computes an identity function.
More details can be found in the paper Self-Normalizing Neural Networks .
Parameters: p (float) – probability of an element to be dropped. Default: 0.5
- Shape:
- Input: Any. Input can be of any shape
- Output: Same. Output is of the same shape as input
Examples:
>>> m = nn.AlphaDropout(p=0.2)
>>> input = torch.randn(20, 16)
>>> output = m(input)
Sparse layers¶
Embedding¶
class torch.nn.Embedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2, scale_grad_by_freq=False, sparse=False, _weight=None)[source]¶
A simple lookup table that stores embeddings of a fixed dictionary and size.
This module is often used to store word embeddings and retrieve them using indices. The input to the module is a list of indices, and the output is the corresponding word embeddings.
Parameters: - num_embeddings (int) – size of the dictionary of embeddings
- embedding_dim (int) – the size of each embedding vector
- padding_idx (int, optional) – If given, pads the output with the embedding vector at padding_idx (initialized to zeros) whenever it encounters the index.
- max_norm (float, optional) – If given, will renormalize the embeddings to always have a norm lesser than this
- norm_type (float, optional) – The p of the p-norm to compute for the max_norm option
- scale_grad_by_freq (bool, optional) – if given, this will scale gradients by the frequency of the words in the mini-batch.
- sparse (bool, optional) – if True, gradient w.r.t. weight matrix will be a sparse tensor. See Notes for more details regarding sparse gradients.
Variables: weight (Tensor) – the learnable weights of the module of shape (num_embeddings, embedding_dim)
- Shape:
- Input: LongTensor of arbitrary shape containing the indices to extract
- Output: (*, embedding_dim), where * is the input shape
Note
Keep in mind that only a limited number of optimizers support sparse gradients: currently it's optim.SGD (CUDA and CPU), optim.SparseAdam (CUDA and CPU) and optim.Adagrad (CPU).
Note
With padding_idx set, the embedding vector at padding_idx is initialized to all zeros. However, note that this vector can be modified afterwards, e.g., using a customized initialization method, and thus changing the vector used to pad the output. The gradient for this vector from Embedding is always zero.
Examples:
>>> # an Embedding module containing 10 tensors of size 3
>>> embedding = nn.Embedding(10, 3)
>>> # a batch of 2 samples of 4 indices each
>>> input = torch.LongTensor([[1,2,4,5],[4,3,2,9]])
>>> embedding(input)
tensor([[[-0.0251, -1.6902,  0.7172],
         [-0.6431,  0.0748,  0.6969],
         [ 1.4970,  1.3448, -0.9685],
         [-0.3677, -2.7265, -0.1685]],
        [[ 1.4970,  1.3448, -0.9685],
         [ 0.4362, -0.4004,  0.9400],
         [-0.6431,  0.0748,  0.6969],
         [ 0.9124, -2.3616,  1.1151]]])
>>> # example with padding_idx
>>> embedding = nn.Embedding(10, 3, padding_idx=0)
>>> input = torch.LongTensor([[0,2,0,5]])
>>> embedding(input)
tensor([[[ 0.0000,  0.0000,  0.0000],
         [ 0.1535, -2.0309,  0.9315],
         [ 0.0000,  0.0000,  0.0000],
         [-0.1655,  0.9897,  0.0635]]])
classmethod from_pretrained(embeddings, freeze=True)[source]¶
Creates Embedding instance from given 2-dimensional FloatTensor.
Parameters: - embeddings (Tensor) – FloatTensor containing weights for the Embedding. First dimension is being passed to Embedding as ‘num_embeddings’, second as ‘embedding_dim’.
- freeze (boolean, optional) – If True, the tensor does not get updated in the learning process. Equivalent to embedding.weight.requires_grad = False. Default: True
Examples:
>>> # FloatTensor containing pretrained weights
>>> weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]])
>>> embedding = nn.Embedding.from_pretrained(weight)
>>> # Get embeddings for index 1
>>> input = torch.LongTensor([1])
>>> embedding(input)
tensor([[ 4.0000,  5.1000,  6.3000]])
EmbeddingBag¶
class torch.nn.EmbeddingBag(num_embeddings, embedding_dim, max_norm=None, norm_type=2, scale_grad_by_freq=False, mode='mean', sparse=False)[source]¶
Computes sums or means of 'bags' of embeddings, without instantiating the intermediate embeddings.
For bags of constant length,
- nn.EmbeddingBag with mode=sum is equivalent to nn.Embedding followed by torch.sum(dim=1)
- nn.EmbeddingBag with mode=mean is equivalent to nn.Embedding followed by torch.mean(dim=1)
However, nn.EmbeddingBag is much more time and memory efficient than using a chain of these operations.
Parameters: - num_embeddings (int) – size of the dictionary of embeddings
- embedding_dim (int) – the size of each embedding vector
- max_norm (float, optional) – If given, will renormalize the embeddings to always have a norm lesser than this
- norm_type (float, optional) – The p of the p-norm to compute for the max_norm option
- scale_grad_by_freq (bool, optional) – if given, this will scale gradients by the frequency of the words in the dictionary.
- mode (string, optional) – ‘sum’ | ‘mean’. Specifies the way to reduce the bag. Default: ‘mean’
- sparse (bool, optional) – if True, gradient w.r.t. weight matrix will be a sparse tensor. See Notes for more details regarding sparse gradients.
Variables: weight (Tensor) – the learnable weights of the module of shape (num_embeddings, embedding_dim)
- Inputs: input, offsets
- input (N or B x N): LongTensor containing the indices of the embeddings to extract. When input is a 1D Tensor of shape N, an offsets Tensor is given, that contains the starting position of each new sequence in the mini-batch.
- offsets (B or None): LongTensor containing the starting positions of each sample in a mini-batch of variable length sequences. If input is 2D (B x N), then offsets does not need to be given, as the input is treated as a mini-batch of fixed length sequences of length N each.
- Shape:
- Input: LongTensor N, N = number of embeddings to extract, (or) LongTensor B x N, B = number of sequences in mini-batch, N = number of embeddings per sequence
- Offsets: LongTensor B, B = number of bags. The values are the offsets in input for each bag, i.e. the cumsum of lengths. Offsets is not given if input is a 2D B x N Tensor, in which case the input is considered to be of fixed-length sequences.
- Output: (B, embedding_dim)
Examples:
>>> # an Embedding module containing 10 tensors of size 3
>>> embedding_sum = nn.EmbeddingBag(10, 3, mode='sum')
>>> # a batch of 2 samples of 4 indices each
>>> input = torch.LongTensor([1,2,4,5,4,3,2,9])
>>> offsets = torch.LongTensor([0,4])
>>> embedding_sum(input, offsets)
tensor([[-0.8861, -5.4350, -0.0523],
        [ 1.1306, -2.5798, -1.0044]])
Distance functions¶
CosineSimilarity¶
class torch.nn.CosineSimilarity(dim=1, eps=1e-08)[source]¶
Returns cosine similarity between x_1 and x_2, computed along dim.
\text{similarity} = \dfrac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert _2 \cdot \Vert x_2 \Vert _2, \epsilon)}
Parameters: - dim (int, optional) – Dimension where cosine similarity is computed. Default: 1
- eps (float, optional) – Small value to avoid division by zero. Default: 1e-8
- Shape:
- Input1: (\ast_1, D, \ast_2) where D is at position dim
- Input2: (\ast_1, D, \ast_2), same shape as the Input1
- Output: (\ast_1, \ast_2)
Examples:
>>> input1 = torch.randn(100, 128)
>>> input2 = torch.randn(100, 128)
>>> cos = nn.CosineSimilarity(dim=1, eps=1e-6)
>>> output = cos(input1, input2)
PairwiseDistance¶
class torch.nn.PairwiseDistance(p=2, eps=1e-06, keepdim=False)[source]¶
Computes the batchwise pairwise distance between vectors v_1, v_2 using the p-norm:
\Vert x \Vert _p := \left( \sum_{i=1}^n \vert x_i \vert ^ p \right) ^ {1/p}
Parameters: - p (real, optional) – the norm degree. Default: 2
- eps (float, optional) – Small value to avoid division by zero. Default: 1e-6
- keepdim (bool, optional) – If True, the reduced vector dimension is kept in the output. Default: False
- Shape:
- Input1: (N, D) where D = vector dimension
- Input2: (N, D), same shape as the Input1
- Output: (N). If keepdim is True, then (N, 1).
Examples:
>>> pdist = nn.PairwiseDistance(p=2)
>>> input1 = torch.randn(100, 128)
>>> input2 = torch.randn(100, 128)
>>> output = pdist(input1, input2)
Loss functions¶
L1Loss¶
class torch.nn.L1Loss(size_average=True, reduce=True)[source]¶
Creates a criterion that measures the mean absolute value of the element-wise difference between input x and target y:
The loss can be described as:
\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = \left| x_n - y_n \right|,
where N is the batch size. If reduce is True, then:
\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if}\; \text{size_average} = \text{True},\\ \operatorname{sum}(L), & \text{if}\; \text{size_average} = \text{False}. \end{cases}\end{split}
x and y are tensors of arbitrary shapes with a total of n elements each.
The sum operation still operates over all the elements, and divides by n.
The division by n can be avoided if one sets the constructor argument size_average=False.
Parameters: - size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
- reduce (bool, optional) – By default, the losses are averaged or summed for each minibatch. When reduce is False, the loss function returns a loss per input/target element instead and ignores size_average. Default: True
- Shape:
- Input: (N, *) where * means any number of additional dimensions
- Target: (N, *), same shape as the input
- Output: scalar. If reduce is False, then (N, *), same shape as the input
Examples:
>>> loss = nn.L1Loss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> output = loss(input, target)
>>> output.backward()
MSELoss¶
class torch.nn.MSELoss(size_average=True, reduce=True)[source]¶
Creates a criterion that measures the mean squared error between n elements in the input x and target y.
The loss can be described as:
\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = \left( x_n - y_n \right)^2,
where N is the batch size. If reduce is True, then:
\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if}\; \text{size_average} = \text{True},\\ \operatorname{sum}(L), & \text{if}\; \text{size_average} = \text{False}. \end{cases}\end{split}
The sum operation still operates over all the elements, and divides by n.
The division by n can be avoided if one sets size_average to False.
To get a batch of losses, a loss per batch element, set reduce to False. These losses are not averaged and are not affected by size_average.
Parameters: - size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Only applies when reduce is True. Default: True
- reduce (bool, optional) – By default, the losses are averaged over observations for each minibatch, or summed, depending on size_average. When reduce is False, returns a loss per input/target element instead and ignores size_average. Default: True
- Shape:
- Input: (N, *) where * means any number of additional dimensions
- Target: (N, *), same shape as the input
Examples:
>>> loss = nn.MSELoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> output = loss(input, target)
>>> output.backward()
CrossEntropyLoss¶
class torch.nn.CrossEntropyLoss(weight=None, size_average=True, ignore_index=-100, reduce=True)[source]¶
This criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class.
It is useful when training a classification problem with C classes. If provided, the optional argument weight should be a 1D Tensor assigning weight to each of the classes. This is particularly useful when you have an unbalanced training set.
The input is expected to contain scores for each class.
input has to be a Tensor of size either (minibatch, C) or (minibatch, C, d_1, d_2, ..., d_K) with K \geq 2 for the K-dimensional case (described later).
This criterion expects a class index (0 to C-1) as the target for each value of a 1D tensor of size minibatch
The loss can be described as:
\text{loss}(x, class) = -\log\left(\frac{\exp(x[class])}{\sum_j \exp(x[j])}\right) = -x[class] + \log\left(\sum_j \exp(x[j])\right)or in the case of the weight argument being specified:
\text{loss}(x, class) = weight[class] \left(-x[class] + \log\left(\sum_j \exp(x[j])\right)\right)The losses are averaged across observations for each minibatch.
Can also be used for higher dimension inputs, such as 2D images, by providing an input of size (minibatch, C, d_1, d_2, ..., d_K) with K \geq 2, where K is the number of dimensions, and a target of appropriate shape (see below).
Parameters: - weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C
- size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Ignored if reduce is False.
- ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets.
- reduce (bool, optional) – By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch instead and ignores size_average. Default: True
- Shape:
- Input: (N, C) where C = number of classes, or (N, C, d_1, d_2, ..., d_K) with K \geq 2 in the case of K-dimensional loss.
- Target: (N) where each value is 0 \leq \text{targets}[i] \leq C-1, or (N, d_1, d_2, ..., d_K) with K \geq 2 in the case of K-dimensional loss.
- Output: scalar. If reduce is False, then the same size as the target: (N), or (N, d_1, d_2, ..., d_K) with K \geq 2 in the case of K-dimensional loss.
Examples:
>>> loss = nn.CrossEntropyLoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.empty(3, dtype=torch.long).random_(5)
>>> output = loss(input, target)
>>> output.backward()
NLLLoss¶
class torch.nn.NLLLoss(weight=None, size_average=True, ignore_index=-100, reduce=True)[source]¶
The negative log likelihood loss. It is useful to train a classification problem with C classes.
If provided, the optional argument weight should be a 1D Tensor assigning weight to each of the classes. This is particularly useful when you have an unbalanced training set.
The input given through a forward call is expected to contain log-probabilities of each class. input has to be a Tensor of size either (minibatch, C) or (minibatch, C, d_1, d_2, ..., d_K) with K \geq 2 for the K-dimensional case (described later).
Obtaining log-probabilities in a neural network is easily achieved by adding a LogSoftmax layer in the last layer of your network. You may use CrossEntropyLoss instead, if you prefer not to add an extra layer.
The target that this loss expects is a class index (0 to C-1, where C = number of classes)
If reduce is False, the loss can be described as:
\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_{y_n} x_{n,y_n}, \quad w_{c} = \text{weight}[c] \cdot \mathbb{1}\{c \not= \text{ignore_index}\},
where N is the batch size. If reduce is True (default), then
\begin{split}\ell(x, y) = \begin{cases} \sum_{n=1}^N \frac{1}{\sum_{n=1}^N w_{y_n}} l_n, & \text{if}\; \text{size_average} = \text{True},\\ \sum_{n=1}^N l_n, & \text{if}\; \text{size_average} = \text{False}. \end{cases}\end{split}
Can also be used for higher dimension inputs, such as 2D images, by providing an input of size (minibatch, C, d_1, d_2, ..., d_K) with K \geq 2, where K is the number of dimensions, and a target of appropriate shape (see below). In the case of images, it computes NLL loss per-pixel.
Parameters: - weight (Tensor, optional) – a manual rescaling weight given to each class. If given, it has to be a Tensor of size C. Otherwise, it is treated as if having all ones.
- size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch with weights set by weight. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
- ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets.
- reduce (bool, optional) – By default, the losses are averaged or summed for each minibatch. When reduce is False, the loss function returns a loss per batch instead and ignores size_average. Default: True
- Shape:
- Input: (N, C) where C = number of classes, or (N, C, d_1, d_2, ..., d_K) with K \geq 2 in the case of K-dimensional loss.
- Target: (N) where each value is 0 \leq \text{targets}[i] \leq C-1, or (N, d_1, d_2, ..., d_K) with K \geq 2 in the case of K-dimensional loss.
- Output: scalar. If reduce is False, then the same size as the target: (N), or (N, d_1, d_2, ..., d_K) with K \geq 2 in the case of K-dimensional loss.
Examples:
>>> m = nn.LogSoftmax(dim=1)
>>> loss = nn.NLLLoss()
>>> # input is of size N x C = 3 x 5
>>> input = torch.randn(3, 5, requires_grad=True)
>>> # each element in target has to have 0 <= value < C
>>> target = torch.tensor([1, 0, 4])
>>> output = loss(m(input), target)
>>> output.backward()
>>>
>>> # 2D loss example (used, for example, with image inputs)
>>> N, C = 5, 4
>>> loss = nn.NLLLoss()
>>> # input is of size N x C x height x width
>>> data = torch.randn(N, 16, 10, 10)
>>> m = nn.Conv2d(16, C, (3, 3))
>>> # each element in target has to have 0 <= value < C
>>> target = torch.empty(N, 8, 8, dtype=torch.long).random_(0, C)
>>> output = loss(m(data), target)
>>> output.backward()
PoissonNLLLoss¶
class torch.nn.PoissonNLLLoss(log_input=True, full=False, size_average=True, eps=1e-08, reduce=True)[source]¶
Negative log likelihood loss with Poisson distribution of target.
The loss can be described as:
\begin{align}\begin{aligned}\text{target} \sim \mathrm{Poisson}(\text{input})\\\text{loss}(\text{input}, \text{target}) = \text{input} - \text{target} * \log(\text{input}) + \log(\text{target!})\end{aligned}\end{align}
The last term can be omitted or approximated with Stirling's formula. The approximation is used for target values greater than 1. For targets less than or equal to 1, zeros are added to the loss.
Parameters: - log_input (bool, optional) – if True the loss is computed as \exp(\text{input}) - \text{target}*\text{input}, if False the loss is \text{input} - \text{target}*\log(\text{input}+\text{eps}).
- full (bool, optional) – whether to compute the full loss, i.e. to add the Stirling approximation term \text{target}*\log(\text{target}) - \text{target} + 0.5 * \log(2\pi\text{target}).
- size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch.
- eps (float, optional) – Small value to avoid evaluation of \log(0) when log_input == False. Default: 1e-8
- reduce (bool, optional) – By default, the losses are averaged over observations for each minibatch, or summed, depending on size_average. When reduce is False, returns a loss per input/target element instead and ignores size_average. Default: True
Examples:
>>> loss = nn.PoissonNLLLoss()
>>> log_input = torch.randn(5, 2, requires_grad=True)
>>> target = torch.randn(5, 2)
>>> output = loss(log_input, target)
>>> output.backward()
KLDivLoss¶
class torch.nn.KLDivLoss(size_average=True, reduce=True)[source]¶
The Kullback-Leibler divergence Loss
KL divergence is a useful distance measure for continuous distributions and is often useful when performing direct regression over the space of (discretely sampled) continuous output distributions.
As with NLLLoss, the input given is expected to contain log-probabilities; however, unlike NLLLoss, input is not restricted to a 2D Tensor, because the criterion is applied element-wise.
This criterion expects a target Tensor of the same size as the input Tensor.
The loss can be described as:
\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = y_n \odot \left( \log y_n - x_n \right),
where N is the batch size. If reduce is True, then:
\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if}\; \text{size_average} = \text{True},\\ \operatorname{sum}(L), & \text{if}\; \text{size_average} = \text{False}. \end{cases}\end{split}
By default, the losses are averaged for each minibatch over observations as well as over dimensions. However, if the field size_average is set to False, the losses are instead summed.
Parameters: - size_average (bool, optional) – By default, the losses are averaged for each minibatch over observations as well as over dimensions. However, if False the losses are instead summed.
- reduce (bool, optional) – By default, the losses are averaged over observations for each minibatch, or summed, depending on size_average. When reduce is False, returns a loss per input/target element instead and ignores size_average. Default: True
- Shape:
- input: (N, *) where * means any number of additional dimensions
- target: (N, *), same shape as the input
- output: scalar. If reduce is False, then (N, *), same shape as the input
BCELoss¶
class torch.nn.BCELoss(weight=None, size_average=True, reduce=True)[source]¶
Creates a criterion that measures the Binary Cross Entropy between the target and the output:
The loss can be described as:
\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right],
where N is the batch size. If reduce is True, then
\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if}\; \text{size_average} = \text{True},\\ \operatorname{sum}(L), & \text{if}\; \text{size_average} = \text{False}. \end{cases}\end{split}
This is used for measuring the error of a reconstruction in, for example, an auto-encoder. Note that the targets y should be numbers between 0 and 1.
Parameters: - weight (Tensor, optional) – a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size “nbatch”.
- size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Default: True
- reduce (bool, optional) – By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per input/target element instead and ignores size_average. Default: True
- Shape:
- Input: (N, *) where * means any number of additional dimensions
- Target: (N, *), same shape as the input
- Output: scalar. If reduce is False, then (N, *), same shape as input.
Examples:
>>> m = nn.Sigmoid()
>>> loss = nn.BCELoss()
>>> input = torch.randn(3, requires_grad=True)
>>> target = torch.empty(3).random_(2)
>>> output = loss(m(input), target)
>>> output.backward()
BCEWithLogitsLoss¶
class torch.nn.BCEWithLogitsLoss(weight=None, size_average=True, reduce=True)[source]¶
This loss combines a Sigmoid layer and the BCELoss in one single class. This version is more numerically stable than using a plain Sigmoid followed by a BCELoss as, by combining the operations into one layer, we take advantage of the log-sum-exp trick for numerical stability.
The loss can be described as:
\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ t_n \cdot \log \sigma(x_n) + (1 - t_n) \cdot \log (1 - \sigma(x_n)) \right],
where N is the batch size. If reduce is True, then
\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if}\; \text{size_average} = \text{True},\\ \operatorname{sum}(L), & \text{if}\; \text{size_average} = \text{False}. \end{cases}\end{split}
This is used for measuring the error of a reconstruction in, for example, an auto-encoder. Note that the targets t[i] should be numbers between 0 and 1.
Parameters: - weight (Tensor, optional) – a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size “nbatch”.
- size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Default: True
- reduce (bool, optional) – By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per input/target element instead and ignores size_average. Default: True
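No example survives for this entry; a minimal sketch (sizes illustrative; note that raw logits are passed directly, with no preceding Sigmoid):
Example:
>>> loss = nn.BCEWithLogitsLoss()
>>> input = torch.randn(3, requires_grad=True)
>>> target = torch.empty(3).random_(2)
>>> output = loss(input, target)
>>> output.backward()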
MarginRankingLoss¶
class torch.nn.MarginRankingLoss(margin=0, size_average=True, reduce=True)[source]¶
Creates a criterion that measures the loss given inputs x1, x2, two 1D mini-batch Tensors, and a label 1D mini-batch tensor y with values (1 or -1).
If y == 1 then it is assumed the first input should be ranked higher (have a larger value) than the second input, and vice-versa for y == -1.
The loss function for each sample in the mini-batch is:
\text{loss}(x, y) = \max(0, -y * (x1 - x2) + \text{margin})
Parameters: - margin (float, optional) – Has a default value of 0.
- size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Default: True
- reduce (bool, optional) – By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
- Shape:
- Input: (N, D) where N is the batch size and D is the size of a sample.
- Target: (N)
- Output: scalar. If reduce is False, then (N).
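No example survives for this entry; a minimal sketch (tensor sizes and the sign-based labels are illustrative assumptions):
Example:
>>> loss = nn.MarginRankingLoss(margin=0.1)
>>> input1 = torch.randn(5, requires_grad=True)
>>> input2 = torch.randn(5, requires_grad=True)
>>> target = torch.sign(torch.randn(5))  # labels in {1, -1}
>>> output = loss(input1, input2, target)
>>> output.backward()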
HingeEmbeddingLoss¶
class torch.nn.HingeEmbeddingLoss(margin=1.0, size_average=True, reduce=True)[source]¶
Measures the loss given an input tensor x and a labels tensor y containing values (1 or -1). This is usually used for measuring whether two inputs are similar or dissimilar, e.g. using the L1 pairwise distance as x, and is typically used for learning nonlinear embeddings or semi-supervised learning.
The loss function for the n-th sample in the mini-batch is:
\begin{split}l_n = \begin{cases} x_n, & \text{if}\; y_n = 1,\\ \max \{0, \Delta - x_n\}, & \text{if}\; y_n = -1, \end{cases}\end{split}and the total loss functions is
\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if}\; \text{size_average} = \text{True},\\ \operatorname{sum}(L), & \text{if}\; \text{size_average} = \text{False}. \end{cases}\end{split}where L = \{l_1,\dots,l_N\}^\top.
Parameters: - margin (float, optional) – Has a default value of 1.
- size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Default: True
- reduce (bool, optional) – By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
- Shape:
- Input: Tensor of arbitrary shape. The sum operation operates over all the elements.
- Target: Same shape as input.
- Output: scalar. If reduce is False, then same shape as the input
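No example survives for this entry; a minimal sketch (sizes and sign-based labels are illustrative assumptions):
Example:
>>> loss = nn.HingeEmbeddingLoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.sign(torch.randn(3, 5))  # labels in {1, -1}
>>> output = loss(input, target)
>>> output.backward()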
MultiLabelMarginLoss¶
class torch.nn.MultiLabelMarginLoss(size_average=True, reduce=True)[source]¶
Creates a criterion that optimizes a multi-class multi-classification hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor) and output y (a 2D Tensor of target class indices). For each sample in the mini-batch:
\text{loss}(x, y) = \sum_{ij}\frac{\max(0, 1 - (x[y[j]] - x[i]))}{\text{x.size}(0)}where i == 0 to x.size(0), j == 0 to y.size(0), y[j] \geq 0, and i \neq y[j] for all i and j.
y and x must have the same size.
The criterion only considers a contiguous block of non-negative targets that starts at the front.
This allows for different samples to have variable amounts of target classes.
Parameters: - size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Default: True
- reduce (bool, optional) – By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
- Shape:
- Input: (C) or (N, C) where N is the batch size and C is the number of classes.
- Target: (C) or (N, C), same shape as the input.
- Output: scalar. If reduce is False, then (N).
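No example survives for this entry; a minimal sketch (values chosen by hand, not from the original docs; only labels 3 and 0 are considered as targets, since -1 terminates the label list):
Example:
>>> loss = nn.MultiLabelMarginLoss()
>>> x = torch.FloatTensor([[0.1, 0.2, 0.4, 0.8]])
>>> y = torch.LongTensor([[3, 0, -1, 1]])
>>> loss(x, y)
tensor(0.8500)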
SmoothL1Loss¶
class torch.nn.SmoothL1Loss(size_average=True, reduce=True)[source]¶
Creates a criterion that uses a squared term if the absolute element-wise error falls below 1 and an L1 term otherwise. It is less sensitive to outliers than the MSELoss and in some cases prevents exploding gradients (e.g. see the "Fast R-CNN" paper by Ross Girshick). Also known as the Huber loss:
\text{loss}(x, y) = \frac{1}{n} \sum_{i} z_{i}where z_{i} is given by:
\begin{split}z_{i} = \begin{cases} 0.5 (x_i - y_i)^2, & \text{if } |x_i - y_i| < 1 \\ |x_i - y_i| - 0.5, & \text{otherwise } \end{cases}\end{split}
x and y are tensors of arbitrary shapes with a total of n elements each; the sum operation still operates over all the elements, and divides by n.
The division by n can be avoided if one sets size_average to False.
Parameters: - size_average (bool, optional) – By default, the losses are averaged over all elements. However, if the field size_average is set to False, the losses are instead summed. Ignored when reduce is False. Default: True
- reduce (bool, optional) – By default, the losses are averaged or summed over elements. When reduce is False, the loss function returns a loss per input/target element instead and ignores size_average. Default: True
- Shape:
- Input: (N, *) where * means any number of additional dimensions
- Target: (N, *), same shape as the input
- Output: scalar. If reduce is False, then (N, *), same shape as the input
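No example survives for this entry; a minimal sketch in the page's doctest style (sizes are illustrative):
Example:
>>> loss = nn.SmoothL1Loss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> output = loss(input, target)
>>> output.backward()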
SoftMarginLoss¶
class torch.nn.SoftMarginLoss(size_average=True, reduce=True)[source]¶
Creates a criterion that optimizes a two-class classification logistic loss between input tensor x and target tensor y (containing 1 or -1).
\text{loss}(x, y) = \sum_i \frac{\log(1 + \exp(-y[i]*x[i]))}{\text{x.nelement}()}
Parameters: - size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Default: True
- reduce (bool, optional) – By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
- Shape:
- Input: Tensor of arbitrary shape.
- Target: Same shape as input.
- Output: scalar. If reduce is False, then same shape as the input
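No example survives for this entry; a minimal sketch (sizes and sign-based labels are illustrative assumptions):
Example:
>>> loss = nn.SoftMarginLoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.sign(torch.randn(3, 5))  # labels in {1, -1}
>>> output = loss(input, target)
>>> output.backward()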
MultiLabelSoftMarginLoss¶
class torch.nn.MultiLabelSoftMarginLoss(weight=None, size_average=True, reduce=True)[source]¶
Creates a criterion that optimizes a multi-label one-versus-all loss based on max-entropy, between input x and target y of size (N, C). For each sample in the minibatch:
loss(x, y) = - \sum_i y[i] * \log((1 + \exp(-x[i]))^{-1}) + (1-y[i]) * \log\left(\frac{\exp(-x[i])}{(1 + \exp(-x[i]))}\right)where i == 0 to x.nElement()-1, y[i] in {0,1}.
Parameters: - weight (Tensor, optional) – a manual rescaling weight given to each class. If given, it has to be a Tensor of size C. Otherwise, it is treated as if having all ones.
- size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Default: True
- reduce (bool, optional) – By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
- Shape:
- Input: (N, C) where N is the batch size and C is the number of classes.
- Target: (N, C), same shape as the input.
- Output: scalar. If reduce is False, then (N).
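No example survives for this entry; a minimal sketch (sizes illustrative; each target entry is a 0/1 label per class):
Example:
>>> loss = nn.MultiLabelSoftMarginLoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.empty(3, 5).random_(2)
>>> output = loss(input, target)
>>> output.backward()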
CosineEmbeddingLoss¶
class torch.nn.CosineEmbeddingLoss(margin=0, size_average=True, reduce=True)[source]¶
Creates a criterion that measures the loss given input tensors x_1, x_2 and a Tensor label y with values 1 or -1. This is used for measuring whether two inputs are similar or dissimilar, using the cosine distance, and is typically used for learning nonlinear embeddings or semi-supervised learning.
The loss function for each sample is:
\begin{split}\text{loss}(x, y) = \begin{cases} 1 - \cos(x_1, x_2), & \text{if } y == 1 \\ \max(0, \cos(x_1, x_2) - \text{margin}), & \text{if } y == -1 \end{cases}\end{split}Parameters: - margin (float, optional) – Should be a number from -1 to 1, 0 to 0.5 is suggested. If margin is missing, the default value is 0.
- size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Default: True
- reduce (bool, optional) – By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
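No example survives for this entry; a minimal sketch (sizes and sign-based labels are illustrative assumptions):
Example:
>>> loss = nn.CosineEmbeddingLoss()
>>> input1 = torch.randn(3, 5, requires_grad=True)
>>> input2 = torch.randn(3, 5, requires_grad=True)
>>> target = torch.sign(torch.randn(3))  # labels in {1, -1}
>>> output = loss(input1, input2, target)
>>> output.backward()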
MultiMarginLoss¶
class torch.nn.MultiMarginLoss(p=1, margin=1, weight=None, size_average=True, reduce=True)[source]¶
Creates a criterion that optimizes a multi-class classification hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor) and output y (a 1D tensor of target class indices, 0 \leq y \leq \text{x.size}(1)-1):
For each mini-batch sample, the loss in terms of the 1D input x and scalar output y is:
\text{loss}(x, y) = \frac{\sum_i \max(0, \text{margin} - x[y] + x[i])^p}{\text{x.size}(0)}
where i == 0 to x.size(0) and i \neq y.
Optionally, you can give non-equal weighting on the classes by passing a 1D weight tensor into the constructor.
The loss function then becomes:
\text{loss}(x, y) = \frac{\sum_i \max(0, w[y] * (\text{margin} - x[y] + x[i]))^p}{\text{x.size}(0)}
Parameters: - p (int, optional) – Has a default value of 1. 1 and 2 are the only supported values
- margin (float, optional) – Has a default value of 1.
- weight (Tensor, optional) – a manual rescaling weight given to each class. If given, it has to be a Tensor of size C. Otherwise, it is treated as if having all ones.
- size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Default: True
- reduce (bool, optional) – By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
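No example survives for this entry; a minimal sketch (sizes illustrative; the target holds one class index per sample):
Example:
>>> loss = nn.MultiMarginLoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.empty(3, dtype=torch.long).random_(5)
>>> output = loss(input, target)
>>> output.backward()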
TripletMarginLoss¶
class torch.nn.TripletMarginLoss(margin=1.0, p=2, eps=1e-06, swap=False, size_average=True, reduce=True)[source]¶
Creates a criterion that measures the triplet loss given input tensors x1, x2, x3 and a margin with a value greater than 0. This is used for measuring a relative similarity between samples. A triplet is composed of a, p and n: anchor, positive example and negative example respectively. The shapes of all input tensors should be (N, D).
The distance swap is described in detail in the paper Learning shallow convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al.
The loss function for each sample in the mini-batch is:
L(a, p, n) = \max \{d(a_i, p_i) - d(a_i, n_i) + {\rm margin}, 0\}where d(x_i, y_i) = \left\lVert {\bf x}_i - {\bf y}_i \right\rVert_p.
Parameters: - margin (float, optional) – Default: 1.
- p (int, optional) – The norm degree for pairwise distance. Default: 2.
- swap (bool, optional) – The distance swap is described in detail in the paper Learning shallow convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al. Default: False.
- size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Default: True
- reduce (bool, optional) – By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
- Shape:
- Input: (N, D) where D is the vector dimension.
- Output: scalar. If reduce is False, then (N).
Examples:
>>> triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2)
>>> input1 = torch.randn(100, 128, requires_grad=True)
>>> input2 = torch.randn(100, 128, requires_grad=True)
>>> input3 = torch.randn(100, 128, requires_grad=True)
>>> output = triplet_loss(input1, input2, input3)
>>> output.backward()
Vision layers¶
PixelShuffle¶
class torch.nn.PixelShuffle(upscale_factor)[source]¶
Rearranges elements in a Tensor of shape (*, C * r^2, H, W) to a tensor of shape (*, C, H * r, W * r).
This is useful for implementing efficient sub-pixel convolution with a stride of 1/r.
Look at the paper: Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network by Shi et al. (2016) for more details.
Parameters: upscale_factor (int) – factor to increase spatial resolution by
- Shape:
- Input: (N, C * \text{upscale_factor}^2, H, W)
- Output: (N, C, H * \text{upscale_factor}, W * \text{upscale_factor})
Examples:
>>> ps = nn.PixelShuffle(3)
>>> input = torch.randn(1, 9, 4, 4)
>>> output = ps(input)
>>> print(output.size())
torch.Size([1, 1, 12, 12])
Upsample¶
class torch.nn.Upsample(size=None, scale_factor=None, mode='nearest', align_corners=None)[source]¶
Upsamples a given multi-channel 1D (temporal), 2D (spatial) or 3D (volumetric) data.
The input data is assumed to be of the form minibatch x channels x [optional depth] x [optional height] x width. Hence, for spatial inputs, we expect a 4D Tensor and for volumetric inputs, we expect a 5D Tensor.
The algorithms available for upsampling are nearest neighbor and linear, bilinear and trilinear for 3D, 4D and 5D input Tensor, respectively.
One can either give a scale_factor or the target output size to calculate the output size. (You cannot give both, as it is ambiguous.)
Parameters: - size (tuple, optional) – a tuple of ints ([optional D_out], [optional H_out], W_out) output sizes
- scale_factor (int / tuple of python:ints, optional) – the multiplier for the image height / width / depth
- mode (string, optional) – the upsampling algorithm: one of nearest, linear, bilinear and trilinear. Default: nearest
- align_corners (bool, optional) – if True, the corner pixels of the input and output tensors are aligned, thus preserving the values at those pixels. This only has effect when mode is linear, bilinear, or trilinear. Default: False
- Shape:
Input: (N, C, W_{in}), (N, C, H_{in}, W_{in}) or (N, C, D_{in}, H_{in}, W_{in})
Output: (N, C, W_{out}), (N, C, H_{out}, W_{out}) or (N, C, D_{out}, H_{out}, W_{out}), where
\begin{align}\begin{aligned}D_{out} = \left\lfloor D_{in} \times \text{scale_factor} \right\rfloor \text{ or size}[-3]\\H_{out} = \left\lfloor H_{in} \times \text{scale_factor} \right\rfloor \text{ or size}[-2]\\W_{out} = \left\lfloor W_{in} \times \text{scale_factor} \right\rfloor \text{ or size}[-1]\end{aligned}\end{align}
Warning
With align_corners = True, the linearly interpolating modes (linear, bilinear, and trilinear) don't proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior is align_corners = False. See below for concrete examples on how this affects the outputs.
Examples:
>>> input = torch.arange(1, 5).view(1, 1, 2, 2)
>>> input
tensor([[[[ 1., 2.],
          [ 3., 4.]]]])
>>> m = nn.Upsample(scale_factor=2, mode='nearest')
>>> m(input)
tensor([[[[ 1., 1., 2., 2.],
          [ 1., 1., 2., 2.],
          [ 3., 3., 4., 4.],
          [ 3., 3., 4., 4.]]]])
>>> m = nn.Upsample(scale_factor=2, mode='bilinear')  # align_corners=False
>>> m(input)
tensor([[[[ 1.0000, 1.2500, 1.7500, 2.0000],
          [ 1.5000, 1.7500, 2.2500, 2.5000],
          [ 2.5000, 2.7500, 3.2500, 3.5000],
          [ 3.0000, 3.2500, 3.7500, 4.0000]]]])
>>> m = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
>>> m(input)
tensor([[[[ 1.0000, 1.3333, 1.6667, 2.0000],
          [ 1.6667, 2.0000, 2.3333, 2.6667],
          [ 2.3333, 2.6667, 3.0000, 3.3333],
          [ 3.0000, 3.3333, 3.6667, 4.0000]]]])
>>> # Try scaling the same data in a larger tensor
>>> input_3x3 = torch.zeros(3, 3).view(1, 1, 3, 3)
>>> input_3x3[:, :, :2, :2].copy_(input)
tensor([[[[ 1., 2.],
          [ 3., 4.]]]])
>>> input_3x3
tensor([[[[ 1., 2., 0.],
          [ 3., 4., 0.],
          [ 0., 0., 0.]]]])
>>> m = nn.Upsample(scale_factor=2, mode='bilinear')  # align_corners=False
>>> # Notice that values in top left corner are the same with the small input (except at boundary)
>>> m(input_3x3)
tensor([[[[ 1.0000, 1.2500, 1.7500, 1.5000, 0.5000, 0.0000],
          [ 1.5000, 1.7500, 2.2500, 1.8750, 0.6250, 0.0000],
          [ 2.5000, 2.7500, 3.2500, 2.6250, 0.8750, 0.0000],
          [ 2.2500, 2.4375, 2.8125, 2.2500, 0.7500, 0.0000],
          [ 0.7500, 0.8125, 0.9375, 0.7500, 0.2500, 0.0000],
          [ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]]])
>>> m = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
>>> # Notice that values in top left corner are now changed
>>> m(input_3x3)
tensor([[[[ 1.0000, 1.4000, 1.8000, 1.6000, 0.8000, 0.0000],
          [ 1.8000, 2.2000, 2.6000, 2.2400, 1.1200, 0.0000],
          [ 2.6000, 3.0000, 3.4000, 2.8800, 1.4400, 0.0000],
          [ 2.4000, 2.7200, 3.0400, 2.5600, 1.2800, 0.0000],
          [ 1.2000, 1.3600, 1.5200, 1.2800, 0.6400, 0.0000],
          [ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]]])
UpsamplingNearest2d¶
class torch.nn.UpsamplingNearest2d(size=None, scale_factor=None)[source]¶
Applies a 2D nearest neighbor upsampling to an input signal composed of several input channels.
To specify the scale, it takes either the size or the scale_factor as its constructor argument.
When size is given, it is the output size of the image (h, w).
Parameters: - size (tuple, optional) – the output size (h, w) of the image
- scale_factor (int, optional) – the multiplier for the image height / width
Warning
This class is deprecated in favor of Upsample.
- Shape:
Input: (N, C, H_{in}, W_{in})
Output: (N, C, H_{out}, W_{out}) where
\begin{align}\begin{aligned}H_{out} = \left\lfloor H_{in} \times \text{scale_factor} \right\rfloor\\W_{out} = \left\lfloor W_{in} \times \text{scale_factor} \right\rfloor\end{aligned}\end{align}
Examples:
>>> input = torch.arange(1, 5).view(1, 1, 2, 2)
>>> input
tensor([[[[ 1., 2.],
          [ 3., 4.]]]])
>>> m = nn.UpsamplingNearest2d(scale_factor=2)
>>> m(input)
tensor([[[[ 1., 1., 2., 2.],
          [ 1., 1., 2., 2.],
          [ 3., 3., 4., 4.],
          [ 3., 3., 4., 4.]]]])
UpsamplingBilinear2d¶
class torch.nn.UpsamplingBilinear2d(size=None, scale_factor=None)[source]¶
Applies a 2D bilinear upsampling to an input signal composed of several input channels.
To specify the scale, it takes either the size or the scale_factor as its constructor argument.
When size is given, it is the output size of the image (h, w).
Parameters: - size (tuple, optional) – the output size (h, w) of the image
- scale_factor (int, optional) – the multiplier for the image height / width
Warning
This class is deprecated in favor of Upsample. It is equivalent to nn.Upsample(..., mode='bilinear', align_corners=True).
- Shape:
Input: (N, C, H_{in}, W_{in})
Output: (N, C, H_{out}, W_{out}) where
\begin{align}\begin{aligned}H_{out} = \left\lfloor H_{in} \times \text{scale_factor} \right\rfloor\\W_{out} = \left\lfloor W_{in} \times \text{scale_factor} \right\rfloor\end{aligned}\end{align}
Examples:
>>> input = torch.arange(1, 5).view(1, 1, 2, 2)
>>> input
tensor([[[[ 1., 2.],
          [ 3., 4.]]]])
>>> m = nn.UpsamplingBilinear2d(scale_factor=2)
>>> m(input)
tensor([[[[ 1.0000, 1.3333, 1.6667, 2.0000],
          [ 1.6667, 2.0000, 2.3333, 2.6667],
          [ 2.3333, 2.6667, 3.0000, 3.3333],
          [ 3.0000, 3.3333, 3.6667, 4.0000]]]])
DataParallel layers (multi-GPU, distributed)¶
DataParallel¶
class torch.nn.DataParallel(module, device_ids=None, output_device=None, dim=0)[source]¶
Implements data parallelism at the module level.
This container parallelizes the application of the given module by splitting the input across the specified devices by chunking in the batch dimension. In the forward pass, the module is replicated on each device, and each replica handles a portion of the input. During the backwards pass, gradients from each replica are summed into the original module.
The batch size should be larger than the number of GPUs used.
See also: Use nn.DataParallel instead of multiprocessing
Arbitrary positional and keyword inputs are allowed to be passed into DataParallel EXCEPT Tensors. All tensors will be scattered on dim specified (default 0). Primitive types will be broadcasted, but all other types will be a shallow copy and can be corrupted if written to in the model’s forward pass.
Warning
Forward and backward hooks defined on module and its submodules will be invoked len(device_ids) times, each with inputs located on a particular device. Particularly, the hooks are only guaranteed to be executed in correct order with respect to operations on corresponding devices. For example, it is not guaranteed that hooks set via register_forward_pre_hook() be executed before all len(device_ids) forward() calls, but that each such hook be executed before the corresponding forward() call of that device.
Note
There is a subtlety in using the pack sequence -> recurrent network -> unpack sequence pattern in a Module wrapped in DataParallel. See the My recurrent network doesn't work with data parallelism section in the FAQ for details.
Parameters: - module – module to be parallelized
- device_ids – CUDA devices (default: all devices)
- output_device – device location of output (default: device_ids[0])
Example:
>>> net = torch.nn.DataParallel(model, device_ids=[0, 1, 2])
>>> output = net(input_var)
DistributedDataParallel¶
class torch.nn.parallel.DistributedDataParallel(module, device_ids=None, output_device=None, dim=0, broadcast_buffers=True)[source]¶
Implements distributed data parallelism at the module level.
This container parallelizes the application of the given module by splitting the input across the specified devices by chunking in the batch dimension. The module is replicated on each machine and each device, and each such replica handles a portion of the input. During the backwards pass, gradients from each node are averaged.
The batch size should be larger than the number of GPUs used locally. It should also be an integer multiple of the number of GPUs so that each chunk is the same size (so that each GPU processes the same number of samples).
See also: Basics and Use nn.DataParallel instead of multiprocessing. The same constraints on input as in torch.nn.DataParallel apply.
Creation of this class requires the distributed package to be already initialized in the process group mode (see torch.distributed.init_process_group()).
Warning
This module works only with the nccl and gloo backends.
Warning
Constructor, forward method, and differentiation of the output (or a function of the output of this module) is a distributed synchronization point. Take that into account in case different processes might be executing different code.
Warning
This module assumes all parameters are registered in the model by the time it is created. No parameters should be added nor removed later. Same applies to buffers.
Warning
This module assumes all buffers and gradients are dense.
Warning
This module doesn't work with torch.autograd.grad() (i.e. it will only work if gradients are to be accumulated in .grad attributes of parameters).
Warning
If you plan on using this module with a nccl backend or a gloo backend (that uses Infiniband), together with a DataLoader that uses multiple workers, please change the multiprocessing start method to forkserver (Python 3 only) or spawn. Unfortunately Gloo (that uses Infiniband) and NCCL2 are not fork safe, and you will likely experience deadlocks if you don't change this setting.
Note
Parameters are never broadcast between processes. The module performs an all-reduce step on gradients and assumes that they will be modified by the optimizer in all processes in the same way. Buffers (e.g. BatchNorm stats) are broadcast from the module in process of rank 0, to all other replicas in the system in every iteration.
Warning
Forward and backward hooks defined on module and its submodules won't be invoked anymore, unless the hooks are initialized in the forward() method.
Parameters: - module – module to be parallelized
- device_ids – CUDA devices (default: all devices)
- output_device – device location of output (default: device_ids[0])
- broadcast_buffers – flag that enables syncing (broadcasting) buffers of the module at beginning of the forward function. (default: True)
Example:
>>> torch.distributed.init_process_group(world_size=4, init_method='...')
>>> net = torch.nn.DistributedDataParallel(model)
Utilities¶
clip_grad_norm_¶
torch.nn.utils.clip_grad_norm_(parameters, max_norm, norm_type=2)[source]¶
Clips gradient norm of an iterable of parameters.
The norm is computed over all gradients together, as if they were concatenated into a single vector. Gradients are modified in-place.
Parameters: Returns: Total norm of the parameters (viewed as a single vector).
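For illustration, a minimal sketch of where gradient clipping fits in a training step (model, criterion, optimizer, and the data are placeholders):
>>> output = model(inputs)
>>> loss = criterion(output, targets)
>>> optimizer.zero_grad()
>>> loss.backward()
>>> # rescale all gradients in-place so their combined norm is at most 1.0
>>> torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
>>> optimizer.step()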
clip_grad_value_¶
weight_norm¶
-
torch.nn.utils.weight_norm(module, name='weight', dim=0)[source]¶
Applies weight normalization to a parameter in the given module.
\mathbf{w} = g \dfrac{\mathbf{v}}{\|\mathbf{v}\|}
Weight normalization is a reparameterization that decouples the magnitude of a weight tensor from its direction. This replaces the parameter specified by name (e.g. “weight”) with two parameters: one specifying the magnitude (e.g. “weight_g”) and one specifying the direction (e.g. “weight_v”). Weight normalization is implemented via a hook that recomputes the weight tensor from the magnitude and direction before every forward() call.
By default, with dim=0, the norm is computed independently per output channel/plane. To compute a norm over the entire weight tensor, use dim=None.
See https://arxiv.org/abs/1602.07868
Parameters:
- module (nn.Module) – containing module
- name (str, optional) – name of weight parameter
- dim (int, optional) – dimension over which to compute the norm
Returns: The original module with the weight norm hook
Example:
>>> m = weight_norm(nn.Linear(20, 40), name='weight')
Linear (20 -> 40)
>>> m.weight_g.size()
torch.Size([40, 1])
>>> m.weight_v.size()
torch.Size([40, 20])
remove_weight_norm¶
PackedSequence¶
-
torch.nn.utils.rnn.PackedSequence(cls, *args)[source]¶
Holds the data and list of batch_sizes of a packed sequence.
All RNN modules accept packed sequences as inputs.
Note
Instances of this class should never be created manually. They are meant to be instantiated by functions like pack_padded_sequence().
Batch sizes represent the number of elements at each sequence step in the batch, not the varying sequence lengths passed to pack_padded_sequence(). For instance, given data abc and x, the PackedSequence would contain data axbc with batch_sizes=[2,1,1].
Variables:
- data (Tensor) – Tensor containing packed sequence
- batch_sizes (Tensor) – Tensor of integers holding information about the batch size at each sequence step
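As a rough illustration (values chosen arbitrarily), packing a padded batch of two sequences of lengths 3 and 1 produces the interleaved layout described above:
>>> from torch.nn.utils.rnn import pack_padded_sequence
>>> padded = torch.tensor([[1, 2, 3], [4, 0, 0]])  # batch of 2, padded to length 3
>>> packed = pack_padded_sequence(padded, lengths=[3, 1], batch_first=True)
>>> packed.data
tensor([ 1,  4,  2,  3])
>>> packed.batch_sizes
tensor([ 2,  1,  1])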
pack_padded_sequence¶
-
torch.nn.utils.rnn.pack_padded_sequence(input, lengths, batch_first=False)[source]¶
Packs a Tensor containing padded sequences of variable length.
Input can be of size T x B x *, where T is the length of the longest sequence (equal to lengths[0]), B is the batch size, and * is any number of dimensions (including 0). If batch_first is True, B x T x * inputs are expected.
The sequences should be sorted by length in a decreasing order, i.e. input[:,0] should be the longest sequence, and input[:,B-1] the shortest one.
Note
This function accepts any input that has at least two dimensions. You can apply it to pack the labels, and use the output of the RNN with them to compute the loss directly. A Tensor can be retrieved from a PackedSequence object by accessing its .data attribute.
Parameters:
- input (Tensor) – padded batch of variable length sequences
- lengths (list[int]) – list of sequence lengths of each batch element
- batch_first (bool, optional) – if True, the input is expected in B x T x * format
Returns: a PackedSequence object
pad_packed_sequence¶
-
torch.nn.utils.rnn.pad_packed_sequence(sequence, batch_first=False, padding_value=0.0, total_length=None)[source]¶
Pads a packed batch of variable length sequences.
It is an inverse operation to pack_padded_sequence().
The returned Tensor’s data will be of size T x B x *, where T is the length of the longest sequence and B is the batch size. If batch_first is True, the data will be transposed into B x T x * format.
Batch elements will be ordered decreasingly by their length.
Note
total_length is useful to implement the pack sequence -> recurrent network -> unpack sequence pattern in a Module wrapped in DataParallel. See this FAQ section for details.
Parameters:
- sequence (PackedSequence) – batch to pad
- batch_first (bool, optional) – if True, the output will be in B x T x * format.
- padding_value (float, optional) – values for padded elements.
- total_length (int, optional) – if not None, the output will be padded to have length total_length. This method will throw ValueError if total_length is less than the max sequence length in sequence.
Returns: Tuple of Tensor containing the padded sequence, and a Tensor containing the list of lengths of each sequence in the batch.
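A minimal round-trip sketch (values arbitrary), showing that pad_packed_sequence inverts pack_padded_sequence and also returns the original lengths:
>>> from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
>>> padded = torch.tensor([[1, 2, 3], [4, 5, 0]])
>>> packed = pack_padded_sequence(padded, lengths=[3, 2], batch_first=True)
>>> unpacked, lengths = pad_packed_sequence(packed, batch_first=True)
>>> unpacked
tensor([[ 1,  2,  3],
        [ 4,  5,  0]])
>>> lengths
tensor([ 3,  2])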
pad_sequence¶
-
torch.nn.utils.rnn.pad_sequence(sequences, batch_first=False, padding_value=0)[source]¶
Pads a list of variable length Tensors with zero.
pad_sequence stacks a list of Tensors along a new dimension, and pads them to equal length. For example, if the input is a list of sequences each of size L x *, the output is of size T x B x * if batch_first is False, and B x T x * otherwise. The list of sequences should be sorted in the order of decreasing length.
B is the batch size; it is equal to the number of elements in sequences. T is the length of the longest sequence. L is the length of a sequence. * is any number of trailing dimensions, including none.
Example
>>> from torch.nn.utils.rnn import pad_sequence
>>> a = torch.ones(25, 300)
>>> b = torch.ones(22, 300)
>>> c = torch.ones(15, 300)
>>> pad_sequence([a, b, c]).size()
torch.Size([25, 3, 300])
Note
This function returns a Tensor of size T x B x * or B x T x *, where T is the length of the longest sequence. The function assumes that the trailing dimensions and type of all the Tensors in sequences are the same.
Parameters:
- sequences (list[Tensor]) – list of variable length sequences
- batch_first (bool, optional) – output will be in B x T x * if True, or in T x B x * otherwise
- padding_value (float, optional) – value for padded elements. Default: 0
Returns: Tensor of size T x B x * if batch_first is False, Tensor of size B x T x * otherwise
pack_sequence¶
-
torch.nn.utils.rnn.pack_sequence(sequences)[source]¶
Packs a list of variable length Tensors.
sequences should be a list of Tensors of size L x *, where L is the length of a sequence and * is any number of trailing dimensions, including zero. They should be sorted in the order of decreasing length.
Example
>>> from torch.nn.utils.rnn import pack_sequence
>>> a = torch.tensor([1, 2, 3])
>>> b = torch.tensor([4, 5])
>>> c = torch.tensor([6])
>>> pack_sequence([a, b, c])
PackedSequence(data=tensor([ 1,  4,  6,  2,  5,  3]), batch_sizes=tensor([ 3,  2,  1]))
Parameters: sequences (list[Tensor]) – A list of sequences of decreasing length.
Returns: a PackedSequence object
torch.nn.functional¶
Convolution functions¶
conv1d¶
-
torch.nn.functional.conv1d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) → Tensor¶
Applies a 1D convolution over an input signal composed of several input planes.
See Conv1d for details and output shape.
Parameters:
- input – input tensor of shape (minibatch \times in\_channels \times iW)
- weight – filters of shape (out\_channels \times \frac{in\_channels}{groups} \times kW)
- bias – optional bias of shape (out\_channels). Default: None
- stride – the stride of the convolving kernel. Can be a single number or a one-element tuple (sW,). Default: 1
- padding – implicit zero paddings on both sides of the input. Can be a single number or a one-element tuple (padW,). Default: 0
- dilation – the spacing between kernel elements. Can be a single number or a one-element tuple (dW,). Default: 1
- groups – split input into groups, in\_channels should be divisible by the number of groups. Default: 1
Examples:
>>> filters = torch.randn(33, 16, 3)
>>> inputs = torch.randn(20, 16, 50)
>>> F.conv1d(inputs, filters)
conv2d¶
-
torch.nn.functional.conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) → Tensor¶
Applies a 2D convolution over an input image composed of several input planes.
See Conv2d for details and output shape.
Parameters:
- input – input tensor of shape (minibatch \times in\_channels \times iH \times iW)
- weight – filters of shape (out\_channels \times \frac{in\_channels}{groups} \times kH \times kW)
- bias – optional bias tensor of shape (out\_channels). Default: None
- stride – the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1
- padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0
- dilation – the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1
- groups – split input into groups, in\_channels should be divisible by the number of groups. Default: 1
Examples:
>>> # With square kernels and equal stride
>>> filters = torch.randn(8, 4, 3, 3)
>>> inputs = torch.randn(1, 4, 5, 5)
>>> F.conv2d(inputs, filters, padding=1)
conv3d¶
-
torch.nn.functional.conv3d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) → Tensor¶
Applies a 3D convolution over an input image composed of several input planes.
See Conv3d for details and output shape.
Parameters:
- input – input tensor of shape (minibatch \times in\_channels \times iT \times iH \times iW)
- weight – filters of shape (out\_channels \times \frac{in\_channels}{groups} \times kT \times kH \times kW)
- bias – optional bias tensor of shape (out\_channels). Default: None
- stride – the stride of the convolving kernel. Can be a single number or a tuple (sT, sH, sW). Default: 1
- padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padT, padH, padW). Default: 0
- dilation – the spacing between kernel elements. Can be a single number or a tuple (dT, dH, dW). Default: 1
- groups – split input into groups, in\_channels should be divisible by the number of groups. Default: 1
Examples:
>>> filters = torch.randn(33, 16, 3, 3, 3)
>>> inputs = torch.randn(20, 16, 50, 10, 20)
>>> F.conv3d(inputs, filters)
conv_transpose1d¶
-
torch.nn.functional.conv_transpose1d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) → Tensor¶
Applies a 1D transposed convolution operator over an input signal composed of several input planes, sometimes also called “deconvolution”.
See ConvTranspose1d for details and output shape.
Parameters:
- input – input tensor of shape (minibatch \times in\_channels \times iW)
- weight – filters of shape (in\_channels \times \frac{out\_channels}{groups} \times kW)
- bias – optional bias of shape (out\_channels). Default: None
- stride – the stride of the convolving kernel. Can be a single number or a tuple (sW,). Default: 1
- padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padW,). Default: 0
- output_padding – implicit zero-paddings of 0 \leq padding < stride on both sides of the output. Can be a single number or a tuple (out_padW,). Default: 0
- groups – split input into groups, in\_channels should be divisible by the number of groups. Default: 1
- dilation – the spacing between kernel elements. Can be a single number or a tuple (dW,). Default: 1
Examples:
>>> inputs = torch.randn(20, 16, 50)
>>> weights = torch.randn(16, 33, 5)
>>> F.conv_transpose1d(inputs, weights)
conv_transpose2d¶
-
torch.nn.functional.conv_transpose2d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) → Tensor¶
Applies a 2D transposed convolution operator over an input image composed of several input planes, sometimes also called “deconvolution”.
See ConvTranspose2d for details and output shape.
Parameters:
- input – input tensor of shape (minibatch \times in\_channels \times iH \times iW)
- weight – filters of shape (in\_channels \times \frac{out\_channels}{groups} \times kH \times kW)
- bias – optional bias of shape (out\_channels). Default: None
- stride – the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1
- padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0
- output_padding – implicit zero-paddings of 0 \leq padding < stride on both sides of the output. Can be a single number or a tuple (out_padH, out_padW). Default: 0
- groups – split input into groups, in\_channels should be divisible by the number of groups. Default: 1
- dilation – the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1
Examples:
>>> # With square kernels and equal stride
>>> inputs = torch.randn(1, 4, 5, 5)
>>> weights = torch.randn(4, 8, 3, 3)
>>> F.conv_transpose2d(inputs, weights, padding=1)
conv_transpose3d¶
-
torch.nn.functional.conv_transpose3d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) → Tensor¶
Applies a 3D transposed convolution operator over an input image composed of several input planes, sometimes also called “deconvolution”.
See ConvTranspose3d for details and output shape.
Parameters:
- input – input tensor of shape (minibatch \times in\_channels \times iT \times iH \times iW)
- weight – filters of shape (in\_channels \times \frac{out\_channels}{groups} \times kT \times kH \times kW)
- bias – optional bias of shape (out\_channels). Default: None
- stride – the stride of the convolving kernel. Can be a single number or a tuple (sT, sH, sW). Default: 1
- padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padT, padH, padW). Default: 0
- output_padding – implicit zero-paddings of 0 \leq padding < stride on both sides of the output. Can be a single number or a tuple (out_padT, out_padH, out_padW). Default: 0
- groups – split input into groups, in\_channels should be divisible by the number of groups. Default: 1
- dilation – the spacing between kernel elements. Can be a single number or a tuple (dT, dH, dW). Default: 1
Examples:
>>> inputs = torch.randn(20, 16, 50, 10, 20)
>>> weights = torch.randn(16, 33, 3, 3, 3)
>>> F.conv_transpose3d(inputs, weights)
Pooling functions¶
avg_pool1d¶
-
torch.nn.functional.avg_pool1d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)[source]¶
Applies a 1D average pooling over an input signal composed of several input planes.
See AvgPool1d for details and output shape.
Parameters:
- input – input tensor of shape (minibatch \times in\_channels \times iW)
- kernel_size – the size of the window. Can be a single number or a tuple (kW,)
- stride – the stride of the window. Can be a single number or a tuple (sW,). Default: kernel_size
- padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padW,). Default: 0
- ceil_mode – when True, will use ceil instead of floor to compute the output shape. Default: False
- count_include_pad – when True, will include the zero-padding in the averaging calculation. Default: True
Example:
>>> # pool of square window of size=3, stride=2
>>> input = torch.tensor([[[1, 2, 3, 4, 5, 6, 7]]])
>>> F.avg_pool1d(input, kernel_size=3, stride=2)
tensor([[[ 2.,  4.,  6.]]])
avg_pool2d¶
-
torch.nn.functional.avg_pool2d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=False) → Tensor¶
Applies 2D average-pooling operation in kH \times kW regions by step size sH \times sW steps. The number of output features is equal to the number of input planes.
See AvgPool2d for details and output shape.
Parameters:
- input – input tensor (minibatch \times in\_channels \times iH \times iW)
- kernel_size – size of the pooling region. Can be a single number or a tuple (kH \times kW)
- stride – stride of the pooling operation. Can be a single number or a tuple (sH, sW). Default: kernel_size
- padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0
- ceil_mode – when True, will use ceil instead of floor in the formula to compute the output shape. Default: False
- count_include_pad – when True, will include the zero-padding in the averaging calculation. Default: False
Warning
Default value for count_include_pad was True in versions before 0.3, and will be changed back to True from 0.4.1 and forward.
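For illustration, a small sketch (shapes arbitrary) of calling the functional form directly:
>>> input = torch.randn(1, 3, 8, 8)           # minibatch x channels x iH x iW
>>> out = F.avg_pool2d(input, kernel_size=2)  # stride defaults to kernel_size
>>> out.size()
torch.Size([1, 3, 4, 4])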
avg_pool3d¶
-
torch.nn.functional.avg_pool3d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=False) → Tensor¶
Applies 3D average-pooling operation in kT \times kH \times kW regions by step size sT \times sH \times sW steps. The number of output features is equal to \lfloor\frac{\text{input planes}}{sT}\rfloor.
See AvgPool3d for details and output shape.
Parameters:
- input – input tensor (minibatch \times in\_channels \times iT \times iH \times iW)
- kernel_size – size of the pooling region. Can be a single number or a tuple (kT \times kH \times kW)
- stride – stride of the pooling operation. Can be a single number or a tuple (sT, sH, sW). Default: kernel_size
- padding – implicit zero paddings on both sides of the input. Can be a single number or a tuple (padT, padH, padW). Default: 0
- ceil_mode – when True, will use ceil instead of floor in the formula to compute the output shape
- count_include_pad – when True, will include the zero-padding in the averaging calculation. Default: False
Warning
Default value for count_include_pad was True in versions before 0.3, and will be changed back to True from 0.4.1 and forward.
max_pool1d¶
max_pool2d¶
max_pool3d¶
max_unpool1d¶
-
torch.nn.functional.max_unpool1d(input, indices, kernel_size, stride=None, padding=0, output_size=None)[source]¶
Computes a partial inverse of MaxPool1d.
See MaxUnpool1d for details.
max_unpool2d¶
-
torch.nn.functional.max_unpool2d(input, indices, kernel_size, stride=None, padding=0, output_size=None)[source]¶
Computes a partial inverse of MaxPool2d.
See MaxUnpool2d for details.
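A minimal sketch (shapes arbitrary) of the usual pairing: pool with return_indices=True, then hand the indices to the unpooling function:
>>> input = torch.randn(1, 1, 4, 4)
>>> pooled, indices = F.max_pool2d(input, kernel_size=2, return_indices=True)
>>> unpooled = F.max_unpool2d(pooled, indices, kernel_size=2)
>>> unpooled.size()  # maxima restored to their original positions, zeros elsewhere
torch.Size([1, 1, 4, 4])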
max_unpool3d¶
-
torch.nn.functional.max_unpool3d(input, indices, kernel_size, stride=None, padding=0, output_size=None)[source]¶
Computes a partial inverse of MaxPool3d.
See MaxUnpool3d for details.
lp_pool1d¶
lp_pool2d¶
adaptive_max_pool1d¶
-
torch.nn.functional.adaptive_max_pool1d(input, output_size, return_indices=False)[source]¶
Applies a 1D adaptive max pooling over an input signal composed of several input planes.
See AdaptiveMaxPool1d for details and output shape.
Parameters:
- output_size – the target output size (single integer)
- return_indices – whether to return pooling indices. Default: False
adaptive_max_pool2d¶
-
torch.nn.functional.adaptive_max_pool2d(input, output_size, return_indices=False)[source]¶
Applies a 2D adaptive max pooling over an input signal composed of several input planes.
See AdaptiveMaxPool2d for details and output shape.
Parameters:
- output_size – the target output size (single integer or double-integer tuple)
- return_indices – whether to return pooling indices. Default: False
adaptive_max_pool3d¶
-
torch.nn.functional.adaptive_max_pool3d(input, output_size, return_indices=False)[source]¶
Applies a 3D adaptive max pooling over an input signal composed of several input planes.
See AdaptiveMaxPool3d for details and output shape.
Parameters:
- output_size – the target output size (single integer or triple-integer tuple)
- return_indices – whether to return pooling indices. Default: False
adaptive_avg_pool1d¶
-
torch.nn.functional.adaptive_avg_pool1d(input, output_size) → Tensor¶
Applies a 1D adaptive average pooling over an input signal composed of several input planes.
See AdaptiveAvgPool1d for details and output shape.
Parameters: output_size – the target output size (single integer)
adaptive_avg_pool2d¶
-
torch.nn.functional.adaptive_avg_pool2d(input, output_size) → Tensor¶
Applies a 2D adaptive average pooling over an input signal composed of several input planes.
See AdaptiveAvgPool2d for details and output shape.
Parameters: output_size – the target output size (single integer or double-integer tuple)
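For illustration (input shape arbitrary), adaptive pooling fixes the output size regardless of the input’s spatial dimensions:
>>> input = torch.randn(1, 64, 10, 9)
>>> F.adaptive_avg_pool2d(input, output_size=(5, 7)).size()
torch.Size([1, 64, 5, 7])
>>> F.adaptive_avg_pool2d(input, output_size=1).size()  # global average pooling
torch.Size([1, 64, 1, 1])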
adaptive_avg_pool3d¶
-
torch.nn.functional.adaptive_avg_pool3d(input, output_size) → Tensor¶
Applies a 3D adaptive average pooling over an input signal composed of several input planes.
See AdaptiveAvgPool3d for details and output shape.
Parameters: output_size – the target output size (single integer or triple-integer tuple)
Non-linear activation functions¶
threshold¶
-
torch.nn.functional.threshold(input, threshold, value, inplace=False)[source]¶
Thresholds each element of the input Tensor.
See Threshold for more details.
-
torch.nn.functional.threshold_(input, threshold, value) → Tensor¶
In-place version of threshold().
relu¶
hardtanh¶
-
torch.nn.functional.hardtanh(input, min_val=-1., max_val=1., inplace=False) → Tensor[source]¶
Applies the HardTanh function element-wise. See Hardtanh for more details.
-
torch.nn.functional.hardtanh_(input, min_val=-1., max_val=1.) → Tensor¶
In-place version of hardtanh().
relu6¶
elu¶
selu¶
leaky_relu¶
-
torch.nn.functional.leaky_relu(input, negative_slope=0.01, inplace=False) → Tensor[source]¶
Applies element-wise, \text{LeakyReLU}(x) = \max(0, x) + \text{negative_slope} * \min(0, x)
See LeakyReLU for more details.
-
torch.nn.functional.leaky_relu_(input, negative_slope=0.01) → Tensor¶
In-place version of leaky_relu().
prelu¶
rrelu¶
glu¶
-
torch.nn.functional.glu(input, dim=-1) → Tensor[source]¶
The gated linear unit. Computes:
H = A \times \sigma(B)
where input is split in half along dim to form A and B.
See Language Modeling with Gated Convolutional Networks.
Parameters:
- input (Tensor) – input tensor
- dim (int) – dimension on which to split the input. Default: -1
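A quick sketch (shape arbitrary): the size of dim must be even, and the output halves it:
>>> input = torch.randn(4, 6)
>>> F.glu(input, dim=-1).size()  # the 6 columns are split into A and B of 3 each
torch.Size([4, 3])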
logsigmoid¶
-
torch.nn.functional.logsigmoid(input) → Tensor¶
Applies element-wise \text{LogSigmoid}(x_i) = \log \left(\frac{1}{1 + \exp(-x_i)}\right)
See LogSigmoid for more details.
hardshrink¶
-
torch.nn.functional.hardshrink(input, lambd=0.5) → Tensor¶
Applies the hard shrinkage function element-wise.
See Hardshrink for more details.
tanhshrink¶
-
torch.nn.functional.tanhshrink(input) → Tensor[source]¶
Applies element-wise, \text{Tanhshrink}(x) = x - \text{Tanh}(x)
See Tanhshrink for more details.
softsign¶
softmin¶
softmax¶
-
torch.nn.functional.softmax(input, dim=None, _stacklevel=3)[source]¶
Applies a softmax function.
Softmax is defined as:
\text{Softmax}(x_{i}) = \frac{\exp(x_i)}{\sum_j \exp(x_j)}
It is applied to all slices along dim, and will re-scale them so that the elements lie in the range (0, 1) and sum to 1.
See Softmax for more details.
Parameters:
- input (Tensor) – input tensor
- dim (int) – a dimension along which softmax will be computed.
Note
This function doesn’t work directly with NLLLoss, which expects the Log to be computed between the Softmax and itself. Use log_softmax instead (it’s faster and has better numerical properties).
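For illustration (a batch of three 5-class score vectors), each row sums to 1 after softmax over dim=1:
>>> scores = torch.randn(3, 5)
>>> probs = F.softmax(scores, dim=1)
>>> probs.sum(dim=1)
tensor([ 1.,  1.,  1.])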
softshrink¶
-
torch.nn.functional.softshrink(input, lambd=0.5) → Tensor¶
Applies the soft shrinkage function element-wise.
See Softshrink for more details.
log_softmax¶
-
torch.nn.functional.log_softmax(input, dim=None, _stacklevel=3)[source]¶
Applies a softmax followed by a logarithm.
While mathematically equivalent to log(softmax(x)), doing these two operations separately is slower and numerically unstable. This function uses an alternative formulation to compute the output and gradient correctly.
See LogSoftmax for more details.
Parameters:
- input (Tensor) – input tensor
- dim (int) – a dimension along which log_softmax will be computed.
tanh¶
Normalization functions¶
batch_norm¶
-
torch.nn.functional.batch_norm(input, running_mean, running_var, weight=None, bias=None, training=False, momentum=0.1, eps=1e-05)[source]¶
Applies Batch Normalization for each channel across a batch of data.
See BatchNorm1d, BatchNorm2d, BatchNorm3d for details.
instance_norm¶
-
torch.nn.functional.instance_norm(input, running_mean=None, running_var=None, weight=None, bias=None, use_input_stats=True, momentum=0.1, eps=1e-05)[source]¶
Applies Instance Normalization for each channel in each data sample in a batch.
See InstanceNorm1d, InstanceNorm2d, InstanceNorm3d for details.
layer_norm¶
local_response_norm¶
-
torch.nn.functional.local_response_norm(input, size, alpha=0.0001, beta=0.75, k=1)[source]¶
Applies local response normalization over an input signal composed of several input planes, where channels occupy the second dimension. Applies normalization across channels.
See LocalResponseNorm for details.
normalize¶
-
torch.nn.functional.normalize(input, p=2, dim=1, eps=1e-12)[source]¶
Performs L_p normalization of inputs over the specified dimension.
Does:
v = \frac{v}{\max(\lVert v \rVert_p, \epsilon)}
for each subtensor v over dimension dim of input. Each subtensor is flattened into a vector, i.e. \lVert v \rVert_p is not a matrix norm.
With default arguments, normalizes over the second dimension with the Euclidean norm.
Parameters:
- input – input tensor of any shape
- p (float) – the exponent value in the norm formulation. Default: 2
- dim (int) – the dimension to reduce. Default: 1
- eps (float) – small value to avoid division by zero. Default: 1e-12
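A short sketch (shape arbitrary): after normalization, each row has unit L2 norm:
>>> x = torch.randn(4, 10)
>>> y = F.normalize(x, p=2, dim=1)
>>> y.norm(p=2, dim=1)
tensor([ 1.,  1.,  1.,  1.])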
Linear functions¶
linear¶
-
torch.nn.functional.linear(input, weight, bias=None)[source]¶
Applies a linear transformation to the incoming data: y = xA^T + b.
Shape:
- Input: (N, *, in\_features) where * means any number of additional dimensions
- Weight: (out\_features, in\_features)
- Bias: (out\_features)
- Output: (N, *, out\_features)
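For illustration (shapes arbitrary), note that weight is stored as (out_features, in_features) and transposed internally:
>>> x = torch.randn(128, 20)
>>> weight = torch.randn(30, 20)  # (out_features, in_features)
>>> bias = torch.randn(30)
>>> F.linear(x, weight, bias).size()
torch.Size([128, 30])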
Dropout functions¶
alpha_dropout¶
-
torch.nn.functional.alpha_dropout(input, p=0.5, training=False)[source]¶
Applies alpha dropout to the input.
See AlphaDropout for details.
Parameters:
- p (float, optional) – the drop probability. Default: 0.5
- training (bool, optional) – switch between training and evaluation mode. Default: False
Distance functions¶
pairwise_distance¶
-
torch.nn.functional.pairwise_distance(x1, x2, p=2, eps=1e-06, keepdim=False)[source]¶
See torch.nn.PairwiseDistance for details.
cosine_similarity¶
-
torch.nn.functional.cosine_similarity(x1, x2, dim=1, eps=1e-08)[source]¶
Returns cosine similarity between x1 and x2, computed along dim.
\text{similarity} = \dfrac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert _2 \cdot \Vert x_2 \Vert _2, \epsilon)}
Parameters:
- x1 (Tensor) – First input.
- x2 (Tensor) – Second input (of size matching x1).
- dim (int, optional) – Dimension of vectors. Default: 1
- eps (float, optional) – Small value to avoid division by zero. Default: 1e-8
Shape:
- Input: (\ast_1, D, \ast_2) where D is at position dim.
- Output: (\ast_1, \ast_2) where 1 is at position dim.
Example:
>>> input1 = torch.randn(100, 128)
>>> input2 = torch.randn(100, 128)
>>> output = F.cosine_similarity(input1, input2)
>>> print(output)
Loss functions¶
binary_cross_entropy¶
-
torch.nn.functional.binary_cross_entropy(input, target, weight=None, size_average=True, reduce=True)[source]¶
Function that measures the Binary Cross Entropy between the target and the output.
See BCELoss for details.
Parameters:
- input – Tensor of arbitrary shape
- target – Tensor of the same shape as input
- weight (Tensor, optional) – a manual rescaling weight; if provided, it’s repeated to match the input tensor shape
- size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Default: True
- reduce (bool, optional) – By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per input/target element instead and ignores size_average. Default: True
Examples:
>>> input = torch.randn((3, 2), requires_grad=True)
>>> target = torch.rand((3, 2), requires_grad=False)
>>> loss = F.binary_cross_entropy(F.sigmoid(input), target)
>>> loss.backward()
poisson_nll_loss¶
-
torch.nn.functional.poisson_nll_loss(input, target, log_input=True, full=False, size_average=True, eps=1e-08, reduce=True)[source]¶
Poisson negative log likelihood loss.
See PoissonNLLLoss for details.
Parameters:
- input – expectation of underlying Poisson distribution.
- target – random sample target \sim \text{Poisson}(input).
- log_input – if True, the loss is computed as \exp(\text{input}) - \text{target} * \text{input}; if False, the loss is \text{input} - \text{target} * \log(\text{input}+\text{eps}). Default: True
- full – whether to compute the full loss, i.e. to add the Stirling approximation term \text{target} * \log(\text{target}) - \text{target} + 0.5 * \log(2 * \pi * \text{target}). Default: False
- size_average – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Default: True
- eps (float, optional) – Small value to avoid evaluation of \log(0) when log_input=False. Default: 1e-8
- reduce (bool, optional) – By default, the losses are averaged over observations for each minibatch, or summed, depending on size_average. When reduce is False, returns a loss per batch instead and ignores size_average. Default: True
cosine_embedding_loss¶
-
torch.nn.functional.cosine_embedding_loss(input1, input2, target, margin=0, size_average=True, reduce=True) → Tensor[source]¶
See CosineEmbeddingLoss for details.
cross_entropy¶
-
torch.nn.functional.cross_entropy(input, target, weight=None, size_average=True, ignore_index=-100, reduce=True)[source]¶
This criterion combines log_softmax and nll_loss in a single function.
See CrossEntropyLoss for details.
Parameters:
- input (Tensor) – (N, C) where C = number of classes, or (N, C, H, W) in case of 2D loss, or (N, C, d_1, d_2, ..., d_K) where K > 1 in the case of K-dimensional loss.
- target (Tensor) – (N) where each value is 0 \leq \text{targets}[i] \leq C-1, or (N, d_1, d_2, ..., d_K) where K \geq 1 for K-dimensional loss.
- weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C
- size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Ignored if reduce is False. Default: True
- ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets. Default: -100
- reduce (bool, optional) – By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch instead and ignores size_average. Default: True
Examples:
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randint(5, (3,), dtype=torch.int64)
>>> loss = F.cross_entropy(input, target)
>>> loss.backward()
hinge_embedding_loss¶
-
torch.nn.functional.hinge_embedding_loss(input, target, margin=1.0, size_average=True, reduce=True) → Tensor[source]¶
See HingeEmbeddingLoss for details.
kl_div¶
-
torch.nn.functional.kl_div(input, target, size_average=True) → Tensor¶
The Kullback-Leibler divergence Loss.
See KLDivLoss for details.
Parameters:
- input – Tensor of arbitrary shape
- target – Tensor of the same shape as input
- size_average – if True, the output is divided by the number of elements in the input tensor. Default: True
- reduce (bool, optional) – By default, the losses are averaged over observations for each minibatch, or summed, depending on size_average. When reduce is False, returns a loss per input/target element instead and ignores size_average. Default: True
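As a sketch (shapes arbitrary): KLDivLoss expects input as log-probabilities and target as probabilities, so the arguments are typically produced with log_softmax and softmax:
>>> input = F.log_softmax(torch.randn(3, 5), dim=1)  # log-probabilities
>>> target = F.softmax(torch.randn(3, 5), dim=1)     # probabilities
>>> loss = F.kl_div(input, target)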
l1_loss¶
mse_loss¶
margin_ranking_loss¶
-
torch.nn.functional.margin_ranking_loss(input1, input2, target, margin=0, size_average=True, reduce=True) → Tensor[source]¶
See MarginRankingLoss for details.
multilabel_margin_loss¶
-
torch.nn.functional.multilabel_margin_loss(input, target, size_average=True, reduce=True) → Tensor¶
See MultiLabelMarginLoss for details.
multilabel_soft_margin_loss¶
-
torch.nn.functional.multilabel_soft_margin_loss(input, target, weight=None, size_average=True) → Tensor[source]¶
See MultiLabelSoftMarginLoss for details.
multi_margin_loss¶
-
torch.nn.functional.multi_margin_loss(input, target, p=1, margin=1, weight=None, size_average=True, reduce=True) → Tensor[source]¶
See MultiMarginLoss for details.
nll_loss¶
-
torch.nn.functional.nll_loss(input, target, weight=None, size_average=True, ignore_index=-100, reduce=True)[source]¶
The negative log likelihood loss.
See NLLLoss for details.
Parameters:
- input – (N, C) where C = number of classes, or (N, C, H, W) in case of 2D loss, or (N, C, d_1, d_2, ..., d_K) where K > 1 in the case of K-dimensional loss.
- target – (N) where each value is 0 \leq \text{targets}[i] \leq C-1, or (N, d_1, d_2, ..., d_K) where K \geq 1 for K-dimensional loss.
- weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C
- size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. If size_average is False, the losses are summed for each minibatch. Default: True
- ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets. Default: -100
Example:
>>> # input is of size N x C = 3 x 5
>>> input = torch.randn(3, 5, requires_grad=True)
>>> # each element in target has to have 0 <= value < C
>>> target = torch.tensor([1, 0, 4])
>>> output = F.nll_loss(F.log_softmax(input), target)
>>> output.backward()
binary_cross_entropy_with_logits¶
-
torch.nn.functional.binary_cross_entropy_with_logits(input, target, weight=None, size_average=True, reduce=True)[source]¶
Function that measures Binary Cross Entropy between target and output logits.
See BCEWithLogitsLoss for details.
Parameters:
- input – Tensor of arbitrary shape
- target – Tensor of the same shape as input
- weight (Tensor, optional) – a manual rescaling weight; if provided, it’s repeated to match the input tensor shape
- size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. However, if the field size_average is set to False, the losses are instead summed for each minibatch. Default: True
- reduce (bool, optional) – By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per input/target element instead and ignores size_average. Default: True
Examples:
>>> input = torch.randn(3, requires_grad=True)
>>> target = torch.empty(3).random_(2)
>>> loss = F.binary_cross_entropy_with_logits(input, target)
>>> loss.backward()
smooth_l1_loss¶
-
torch.nn.functional.smooth_l1_loss(input, target, size_average=True, reduce=True) → Tensor¶
Function that uses a squared term if the absolute element-wise error falls below 1 and an L1 term otherwise.
See SmoothL1Loss for details.
soft_margin_loss¶
-
torch.nn.functional.soft_margin_loss(input, target, size_average=True, reduce=True) → Tensor¶
See SoftMarginLoss for details.
triplet_margin_loss¶
-
torch.nn.functional.triplet_margin_loss(anchor, positive, negative, margin=1.0, p=2, eps=1e-06, swap=False, size_average=True, reduce=True)[source]¶
See TripletMarginLoss for details.
Vision functions¶
pixel_shuffle¶
-
torch.nn.functional.pixel_shuffle(input, upscale_factor)[source]¶
Rearranges elements in a tensor of shape [*, C*r^2, H, W] to a tensor of shape [*, C, H*r, W*r].
See PixelShuffle for details.
Parameters:
- input (Tensor) – the input tensor
- upscale_factor (int) – factor to increase spatial resolution by
Examples:
>>> ps = nn.PixelShuffle(3)
>>> input = torch.empty(1, 9, 4, 4)
>>> output = ps(input)
>>> print(output.size())
torch.Size([1, 1, 12, 12])
pad¶
-
torch.nn.functional.pad(input, pad, mode='constant', value=0)[source]¶
Pads tensor.
Nd constant padding: The number of dimensions to pad is \left\lfloor\frac{len(padding)}{2}\right\rfloor, and the dimensions that get padded begin with the last dimension and move forward. See below for examples.
1D, 2D and 3D “reflect” / “replicate” padding:
- for 1D: 3D input tensor with padding of the form (padLeft, padRight)
- for 2D: 4D input tensor with padding of the form (padLeft, padRight, padTop, padBottom)
- for 3D: 5D input tensor with padding of the form (padLeft, padRight, padTop, padBottom, padFront, padBack). No “reflect” implementation.
See torch.nn.ConstantPad2d, torch.nn.ReflectionPad2d, and torch.nn.ReplicationPad2d for concrete examples on how each of the padding modes works.
Parameters:
- input (Tensor) – Nd tensor
- pad (tuple) – m-elem tuple, where m/2 \leq input dimensions and m is even
- mode – ‘constant’, ‘reflect’ or ‘replicate’. Default: ‘constant’
- value – fill value for ‘constant’ padding. Default: 0
Examples:
>>> t4d = torch.empty(3, 3, 4, 2)
>>> p1d = (1, 1)  # pad last dim by 1 on each side
>>> out = F.pad(t4d, p1d, "constant", 0)  # effectively zero padding
>>> print(out.data.size())
torch.Size([3, 3, 4, 4])
>>> p2d = (1, 1, 2, 2)  # pad last dim by (1, 1) and 2nd to last by (2, 2)
>>> out = F.pad(t4d, p2d, "constant", 0)
>>> print(out.data.size())
torch.Size([3, 3, 8, 4])
>>> t4d = torch.empty(3, 3, 4, 2)
>>> p3d = (0, 1, 2, 1, 3, 3)  # pad by (0, 1), (2, 1), and (3, 3)
>>> out = F.pad(t4d, p3d, "constant", 0)
>>> print(out.data.size())
torch.Size([3, 9, 7, 3])
upsample¶
-
torch.nn.functional.upsample(input, size=None, scale_factor=None, mode='nearest', align_corners=None)[source]¶
Upsamples the input to either the given size or the given scale_factor.
The algorithm used for upsampling is determined by mode.
Currently temporal, spatial and volumetric upsampling are supported, i.e. expected inputs are 3-D, 4-D or 5-D in shape.
The input dimensions are interpreted in the form: mini-batch x channels x [optional depth] x [optional height] x width.
The modes available for upsampling are: nearest, linear (3D-only), bilinear (4D-only), trilinear (5D-only)
Parameters:
- input (Tensor) – the input tensor
- size (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int]) – output spatial size.
- scale_factor (int) – multiplier for spatial size. Has to be an integer.
- mode (string) – algorithm used for upsampling: ‘nearest’ | ‘linear’ | ‘bilinear’ | ‘trilinear’. Default: ‘nearest’
- align_corners (bool, optional) – if True, the corner pixels of the input and output tensors are aligned, thus preserving the values at those pixels. This only has an effect when mode is linear, bilinear, or trilinear. Default: False
Warning
With align_corners = True, the linearly interpolating modes (linear, bilinear, and trilinear) don’t proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior is align_corners = False. See Upsample for concrete examples on how this affects the outputs.
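For illustration (shape arbitrary), doubling the spatial size of a 4-D input with bilinear interpolation:
>>> input = torch.randn(1, 3, 12, 12)
>>> out = F.upsample(input, scale_factor=2, mode='bilinear', align_corners=False)
>>> out.size()
torch.Size([1, 3, 24, 24])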
upsample_nearest¶
-
torch.nn.functional.upsample_nearest(input, size=None, scale_factor=None)[source]¶
Upsamples the input, using nearest neighbours’ pixel values.
Warning
This function is deprecated in favor of torch.nn.functional.upsample(). This is equivalent to nn.functional.upsample(..., mode='nearest').
Currently spatial and volumetric upsampling are supported (i.e. expected inputs are 4 or 5 dimensional).
Parameters:
- input (Tensor) – the input tensor
- size (int or Tuple[int, int] or Tuple[int, int, int]) – output spatial size.
- scale_factor (int) – multiplier for spatial size. Has to be an integer.
upsample_bilinear¶
-
torch.nn.functional.upsample_bilinear(input, size=None, scale_factor=None)[source]¶
Upsamples the input, using bilinear upsampling.
Warning
This function is deprecated in favor of torch.nn.functional.upsample(). This is equivalent to nn.functional.upsample(..., mode='bilinear', align_corners=True).
Expected inputs are spatial (4 dimensional). Use upsample_trilinear for volumetric (5 dimensional) inputs.
Parameters:
- input (Tensor) – the input tensor
- size (int or Tuple[int, int]) – output spatial size.
- scale_factor (int) – multiplier for spatial size. Has to be an integer.
grid_sample¶
-
torch.nn.functional.grid_sample(input, grid, mode='bilinear', padding_mode='zeros')[source]¶
Given an input and a flow-field grid, computes the output using input pixel locations from the grid.
Uses bilinear interpolation to sample the input pixels. Currently, only spatial (4 dimensional) and volumetric (5 dimensional) inputs are supported.
For each output location, grid has x, y input pixel locations which are used to compute the output. In the case of 5D inputs, grid has x, y, z pixel locations.
Note
To avoid confusion in notation, let’s note that x corresponds to the width dimension IW, y corresponds to the height dimension IH and z corresponds to the depth dimension ID.
grid has values in the range of [-1, 1]. This is because the pixel locations are normalized by the input height and width.
For example, values x: -1, y: -1 give the left-top pixel of the input, and values x: 1, y: 1 give the right-bottom pixel of the input.
If grid has values outside the range of [-1, 1], those locations are handled as defined by padding_mode. Options are zeros or border, defining those locations to use 0 or image border values as contribution to the bilinear interpolation.
Note
This function is used in building Spatial Transformer Networks.
Parameters:
- input (Tensor) – input batch (N x C x IH x IW) or (N x C x ID x IH x IW)
- grid (Tensor) – flow-field of size (N x OH x OW x 2) or (N x OD x OH x OW x 3)
- padding_mode (str) – padding mode for outside grid values: ‘zeros’ | ‘border’. Default: ‘zeros’
Returns: output Tensor
Return type: output (Tensor)
affine_grid¶
-
torch.nn.functional.affine_grid(theta, size)[source]¶
Generates a 2d flow field, given a batch of affine matrices theta. Generally used in conjunction with grid_sample() to implement Spatial Transformer Networks.
Parameters:
- theta (Tensor) – input batch of affine matrices (N \times 2 \times 3)
- size (torch.Size) – the target output image size (N \times C \times H \times W) Example: torch.Size((32, 3, 24, 24))
Returns: output Tensor of size (N \times H \times W \times 2)
Return type: output (Tensor)
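A minimal sketch (one 3-channel image; the identity transform is chosen so the output matches the input) of the usual affine_grid/grid_sample pairing:
>>> input = torch.randn(1, 3, 24, 24)
>>> theta = torch.tensor([[[1., 0., 0.], [0., 1., 0.]]])  # identity affine matrix
>>> grid = F.affine_grid(theta, torch.Size((1, 3, 24, 24)))
>>> output = F.grid_sample(input, grid)
>>> output.size()
torch.Size([1, 3, 24, 24])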
DataParallel functions (multi-GPU, distributed)¶
data_parallel¶
-
torch.nn.parallel.data_parallel(module, inputs, device_ids=None, output_device=None, dim=0, module_kwargs=None)[source]¶
Evaluates module(input) in parallel across the GPUs given in device_ids.
This is the functional version of the DataParallel module.
Parameters: - module – the module to evaluate in parallel
- inputs – inputs to the module
- device_ids – GPU ids on which to replicate module
- output_device – GPU location of the output. Use -1 to indicate the CPU. (default: device_ids[0])
Returns: a Tensor containing the result of module(input) located on output_device
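A hedged sketch (model is a placeholder nn.Module, and at least two visible GPUs are assumed):
>>> # inputs are scattered along dim 0 across GPUs 0 and 1 for one forward pass
>>> input = torch.randn(64, 3, 224, 224).cuda(0)
>>> output = torch.nn.parallel.data_parallel(model, input, device_ids=[0, 1])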
torch.nn.init¶
-
torch.nn.init.calculate_gain(nonlinearity, param=None)[source]¶
Return the recommended gain value for the given nonlinearity function. The values are as follows:

nonlinearity         gain
Linear / Identity    1
Conv{1,2,3}D         1
Sigmoid              1
Tanh                 \frac{5}{3}
ReLU                 \sqrt{2}
Leaky ReLU           \sqrt{\frac{2}{1 + \text{negative_slope}^2}}

Parameters:
- nonlinearity – the non-linear function (nn.functional name)
- param – optional parameter for the non-linear function
Examples
>>> gain = nn.init.calculate_gain('leaky_relu')
-
torch.nn.init.uniform_(tensor, a=0, b=1)[source]¶
Fills the input Tensor with values drawn from the uniform distribution \mathcal{U}(a, b).
Parameters:
- tensor – an n-dimensional torch.Tensor
- a – the lower bound of the uniform distribution
- b – the upper bound of the uniform distribution
Examples
>>> w = torch.empty(3, 5)
>>> nn.init.uniform_(w)
-
torch.nn.init.normal_(tensor, mean=0, std=1)[source]¶
Fills the input Tensor with values drawn from the normal distribution \mathcal{N}(\text{mean}, \text{std}).
Parameters:
- tensor – an n-dimensional torch.Tensor
- mean – the mean of the normal distribution
- std – the standard deviation of the normal distribution
Examples
>>> w = torch.empty(3, 5)
>>> nn.init.normal_(w)
-
torch.nn.init.constant_(tensor, val)[source]¶
Fills the input Tensor with the value \text{val}.
Parameters:
- tensor – an n-dimensional torch.Tensor
- val – the value to fill the tensor with
Examples
>>> w = torch.empty(3, 5)
>>> nn.init.constant_(w, 0.3)
-
torch.nn.init.eye_(tensor)[source]¶
Fills the 2-dimensional input Tensor with the identity matrix. Preserves the identity of the inputs in Linear layers, where as many inputs are preserved as possible.
Parameters: tensor – a 2-dimensional torch.Tensor
Examples
>>> w = torch.empty(3, 5)
>>> nn.init.eye_(w)
-
torch.nn.init.dirac_(tensor)[source]¶
Fills the {3, 4, 5}-dimensional input Tensor with the Dirac delta function. Preserves the identity of the inputs in Convolutional layers, where as many input channels are preserved as possible.
Parameters: tensor – a {3, 4, 5}-dimensional torch.Tensor
Examples
>>> w = torch.empty(3, 16, 5, 5)
>>> nn.init.dirac_(w)
-
torch.nn.init.xavier_uniform_(tensor, gain=1)[source]¶
Fills the input Tensor with values according to the method described in “Understanding the difficulty of training deep feedforward neural networks” - Glorot, X. & Bengio, Y. (2010), using a uniform distribution. The resulting tensor will have values sampled from \mathcal{U}(-a, a) where
a = \text{gain} \times \sqrt{\frac{6}{\text{fan_in} + \text{fan_out}}}
Also known as Glorot initialization.
Parameters:
- tensor – an n-dimensional torch.Tensor
- gain – an optional scaling factor
Examples
>>> w = torch.empty(3, 5)
>>> nn.init.xavier_uniform_(w, gain=nn.init.calculate_gain('relu'))
-
torch.nn.init.xavier_normal_(tensor, gain=1)[source]¶
Fills the input Tensor with values according to the method described in “Understanding the difficulty of training deep feedforward neural networks” - Glorot, X. & Bengio, Y. (2010), using a normal distribution. The resulting tensor will have values sampled from \mathcal{N}(0, \text{std}) where
\text{std} = \text{gain} \times \sqrt{\frac{2}{\text{fan_in} + \text{fan_out}}}
Also known as Glorot initialization.
Parameters:
- tensor – an n-dimensional torch.Tensor
- gain – an optional scaling factor
Examples
>>> w = torch.empty(3, 5)
>>> nn.init.xavier_normal_(w)
-
torch.nn.init.kaiming_uniform_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu')[source]¶
Fills the input Tensor with values according to the method described in “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification” - He, K. et al. (2015), using a uniform distribution. The resulting tensor will have values sampled from \mathcal{U}(-\text{bound}, \text{bound}) where
\text{bound} = \sqrt{\frac{6}{(1 + a^2) \times \text{fan_in}}}
Also known as He initialization.
Parameters:
- tensor – an n-dimensional torch.Tensor
- a – the negative slope of the rectifier used after this layer (0 for ReLU by default)
- mode – either ‘fan_in’ (default) or ‘fan_out’. Choosing fan_in preserves the magnitude of the variance of the weights in the forward pass. Choosing fan_out preserves the magnitudes in the backwards pass.
- nonlinearity – the non-linear function (nn.functional name), recommended to use only with ‘relu’ or ‘leaky_relu’ (default).
Examples
>>> w = torch.empty(3, 5)
>>> nn.init.kaiming_uniform_(w, mode='fan_in', nonlinearity='relu')
-
torch.nn.init.kaiming_normal_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu')[source]¶
Fills the input Tensor with values according to the method described in “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification” - He, K. et al. (2015), using a normal distribution. The resulting tensor will have values sampled from \mathcal{N}(0, \text{std}) where
\text{std} = \sqrt{\frac{2}{(1 + a^2) \times \text{fan_in}}}
Also known as He initialization.
Parameters:
- tensor – an n-dimensional torch.Tensor
- a – the negative slope of the rectifier used after this layer (0 for ReLU by default)
- mode – either ‘fan_in’ (default) or ‘fan_out’. Choosing fan_in preserves the magnitude of the variance of the weights in the forward pass. Choosing fan_out preserves the magnitudes in the backwards pass.
- nonlinearity – the non-linear function (nn.functional name), recommended to use only with ‘relu’ or ‘leaky_relu’ (default).
Examples
>>> w = torch.empty(3, 5)
>>> nn.init.kaiming_normal_(w, mode='fan_out', nonlinearity='relu')
-
torch.nn.init.orthogonal_(tensor, gain=1)[source]¶
Fills the input Tensor with a (semi) orthogonal matrix, as described in “Exact solutions to the nonlinear dynamics of learning in deep linear neural networks” - Saxe, A. et al. (2013). The input tensor must have at least 2 dimensions, and for tensors with more than 2 dimensions the trailing dimensions are flattened.
Parameters:
- tensor – an n-dimensional torch.Tensor, where n \geq 2
- gain – optional scaling factor
Examples
>>> w = torch.empty(3, 5)
>>> nn.init.orthogonal_(w)
-
torch.nn.init.sparse_(tensor, sparsity, std=0.01)[source]¶
Fills the 2D input Tensor as a sparse matrix, where the non-zero elements will be drawn from the normal distribution \mathcal{N}(0, 0.01), as described in “Deep learning via Hessian-free optimization” - Martens, J. (2010).
Parameters:
- tensor – an n-dimensional torch.Tensor
- sparsity – The fraction of elements in each column to be set to zero
- std – the standard deviation of the normal distribution used to generate the non-zero values
Examples
>>> w = torch.empty(3, 5)
>>> nn.init.sparse_(w, sparsity=0.1)