torch.nn

These are the basic building blocks for graphs:

Parameter

A kind of Tensor that is to be considered a module parameter.

UninitializedParameter

A parameter that is not initialized.

UninitializedBuffer

A buffer that is not initialized.

Containers

Module

Base class for all neural network modules.

Sequential

A sequential container.

ModuleList

Holds submodules in a list.

ModuleDict

Holds submodules in a dictionary.

ParameterList

Holds parameters in a list.

ParameterDict

Holds parameters in a dictionary.
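
For illustration, a minimal sketch of how these containers compose; the module name TinyNet and all sizes below are arbitrary choices, not part of the API:

    import torch
    import torch.nn as nn

    class TinyNet(nn.Module):
        def __init__(self):
            super().__init__()
            # Sequential chains submodules in order; ModuleDict registers named submodules.
            self.body = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
            self.heads = nn.ModuleDict({"a": nn.Linear(4, 2), "b": nn.Linear(4, 3)})
            # ParameterList registers raw Parameters so they appear in net.parameters().
            self.scales = nn.ParameterList([nn.Parameter(torch.ones(1))])

        def forward(self, x, head="a"):
            h = self.body(x) * self.scales[0]
            return self.heads[head](h)

    net = TinyNet()
    out = net(torch.randn(5, 8))  # shape (5, 2)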

Global Hooks for Modules

register_module_forward_pre_hook

Register a forward pre-hook common to all modules.

register_module_forward_hook

Register a global forward hook for all the modules.

register_module_backward_hook

Register a backward hook common to all the modules.

register_module_full_backward_pre_hook

Register a backward pre-hook common to all the modules.

register_module_full_backward_hook

Register a backward hook common to all the modules.

register_module_buffer_registration_hook

Register a buffer registration hook common to all modules.

register_module_module_registration_hook

Register a module registration hook common to all modules.

register_module_parameter_registration_hook

Register a parameter registration hook common to all modules.
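
As a hedged sketch, a global forward hook registered for all modules; the hook body and printed format below are illustrative only:

    import torch
    import torch.nn as nn

    def log_output_shape(module, inputs, output):
        # Runs after every module's forward() while the hook is registered.
        if torch.is_tensor(output):
            print(type(module).__name__, tuple(output.shape))

    handle = nn.modules.module.register_module_forward_hook(log_output_shape)
    nn.Sequential(nn.Linear(3, 4), nn.ReLU())(torch.randn(2, 3))
    handle.remove()  # unregister the global hook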

Convolution Layers

nn.Conv1d

Applies a 1D convolution over an input signal composed of several input planes.

nn.Conv2d

Applies a 2D convolution over an input signal composed of several input planes.

nn.Conv3d

Applies a 3D convolution over an input signal composed of several input planes.

nn.ConvTranspose1d

Applies a 1D transposed convolution operator over an input image composed of several input planes.

nn.ConvTranspose2d

Applies a 2D transposed convolution operator over an input image composed of several input planes.

nn.ConvTranspose3d

Applies a 3D transposed convolution operator over an input image composed of several input planes.

nn.LazyConv1d

A torch.nn.Conv1d module with lazy initialization of the in_channels argument.

nn.LazyConv2d

A torch.nn.Conv2d module with lazy initialization of the in_channels argument.

nn.LazyConv3d

A torch.nn.Conv3d module with lazy initialization of the in_channels argument.

nn.LazyConvTranspose1d

A torch.nn.ConvTranspose1d module with lazy initialization of the in_channels argument.

nn.LazyConvTranspose2d

A torch.nn.ConvTranspose2d module with lazy initialization of the in_channels argument.

nn.LazyConvTranspose3d

A torch.nn.ConvTranspose3d module with lazy initialization of the in_channels argument.

nn.Unfold

Extracts sliding local blocks from a batched input tensor.

nn.Fold

Combines an array of sliding local blocks into a large containing tensor.
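
A short usage sketch of the convolution modules; the channel counts, kernel sizes, and input shape are arbitrary illustration values:

    import torch
    import torch.nn as nn

    x = torch.randn(1, 3, 32, 32)                         # (N, C_in, H, W)
    conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
    up = nn.ConvTranspose2d(16, 8, kernel_size=2, stride=2)
    lazy = nn.LazyConv2d(16, kernel_size=3, padding=1)    # in_channels inferred on first call

    y = conv(x)   # (1, 16, 32, 32)
    z = up(y)     # (1, 8, 64, 64)
    w = lazy(x)   # (1, 16, 32, 32)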

Pooling Layers

nn.MaxPool1d

Applies a 1D max pooling over an input signal composed of several input planes.

nn.MaxPool2d

Applies a 2D max pooling over an input signal composed of several input planes.

nn.MaxPool3d

Applies a 3D max pooling over an input signal composed of several input planes.

nn.MaxUnpool1d

Computes a partial inverse of MaxPool1d.

nn.MaxUnpool2d

Computes a partial inverse of MaxPool2d.

nn.MaxUnpool3d

Computes a partial inverse of MaxPool3d.

nn.AvgPool1d

Applies a 1D average pooling over an input signal composed of several input planes.

nn.AvgPool2d

Applies a 2D average pooling over an input signal composed of several input planes.

nn.AvgPool3d

Applies a 3D average pooling over an input signal composed of several input planes.

nn.FractionalMaxPool2d

Applies a 2D fractional max pooling over an input signal composed of several input planes.

nn.FractionalMaxPool3d

Applies a 3D fractional max pooling over an input signal composed of several input planes.

nn.LPPool1d

Applies a 1D power-average pooling over an input signal composed of several input planes.

nn.LPPool2d

Applies a 2D power-average pooling over an input signal composed of several input planes.

nn.AdaptiveMaxPool1d

Applies a 1D adaptive max pooling over an input signal composed of several input planes.

nn.AdaptiveMaxPool2d

Applies a 2D adaptive max pooling over an input signal composed of several input planes.

nn.AdaptiveMaxPool3d

Applies a 3D adaptive max pooling over an input signal composed of several input planes.

nn.AdaptiveAvgPool1d

Applies a 1D adaptive average pooling over an input signal composed of several input planes.

nn.AdaptiveAvgPool2d

Applies a 2D adaptive average pooling over an input signal composed of several input planes.

nn.AdaptiveAvgPool3d

Applies a 3D adaptive average pooling over an input signal composed of several input planes.
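
A brief sketch of max pooling, its partial inverse, and adaptive pooling; shapes are chosen arbitrarily:

    import torch
    import torch.nn as nn

    x = torch.randn(1, 3, 32, 32)
    pool = nn.MaxPool2d(kernel_size=2, return_indices=True)
    y, idx = pool(x)                                   # (1, 3, 16, 16)
    x_rec = nn.MaxUnpool2d(kernel_size=2)(y, idx)      # partial inverse: (1, 3, 32, 32)
    g = nn.AdaptiveAvgPool2d((1, 1))(x)                # fixed output size: (1, 3, 1, 1)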

Padding Layers

nn.ReflectionPad1d

Pads the input tensor using the reflection of the input boundary.

nn.ReflectionPad2d

Pads the input tensor using the reflection of the input boundary.

nn.ReflectionPad3d

Pads the input tensor using the reflection of the input boundary.

nn.ReplicationPad1d

Pads the input tensor using replication of the input boundary.

nn.ReplicationPad2d

Pads the input tensor using replication of the input boundary.

nn.ReplicationPad3d

Pads the input tensor using replication of the input boundary.

nn.ZeroPad1d

Pads the input tensor boundaries with zero.

nn.ZeroPad2d

Pads the input tensor boundaries with zero.

nn.ZeroPad3d

Pads the input tensor boundaries with zero.

nn.ConstantPad1d

Pads the input tensor boundaries with a constant value.

nn.ConstantPad2d

Pads the input tensor boundaries with a constant value.

nn.ConstantPad3d

Pads the input tensor boundaries with a constant value.

nn.CircularPad1d

Pads the input tensor using circular padding of the input boundary.

nn.CircularPad2d

Pads the input tensor using circular padding of the input boundary.

nn.CircularPad3d

Pads the input tensor using circular padding of the input boundary.

Non-linear Activations (weighted sum, nonlinearity)

nn.ELU

Applies the Exponential Linear Unit (ELU) function, element-wise, as described in the paper: Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs).

nn.Hardshrink

Applies the Hard Shrinkage (Hardshrink) function element-wise.

nn.Hardsigmoid

Applies the Hardsigmoid function element-wise.

nn.Hardtanh

Applies the HardTanh function element-wise.

nn.Hardswish

Applies the Hardswish function, element-wise, as described in the paper: Searching for MobileNetV3.

nn.LeakyReLU

Applies the element-wise function \text{LeakyReLU}(x) = \max(0, x) + \text{negative\_slope} \cdot \min(0, x).

nn.LogSigmoid

Applies the element-wise function \text{LogSigmoid}(x) = \log\left(\frac{1}{1 + \exp(-x)}\right).

nn.MultiheadAttention

Allows the model to jointly attend to information from different representation subspaces as described in the paper: Attention Is All You Need.

nn.PReLU

Applies the element-wise function \text{PReLU}(x) = \max(0, x) + a \cdot \min(0, x), where a is a learnable parameter.

nn.ReLU

Applies the rectified linear unit function element-wise: \text{ReLU}(x) = \max(0, x).

nn.ReLU6

Applies the element-wise function \text{ReLU6}(x) = \min(\max(0, x), 6).

nn.RReLU

Applies the randomized leaky rectified linear unit function, element-wise, as described in the paper: Empirical Evaluation of Rectified Activations in Convolutional Network.

nn.SELU

Applies the SELU function element-wise: \text{SELU}(x) = \text{scale} \cdot (\max(0, x) + \min(0, \alpha \cdot (\exp(x) - 1))), with fixed constants \alpha and \text{scale}.

nn.CELU

Applies the element-wise function \text{CELU}(x) = \max(0, x) + \min(0, \alpha \cdot (\exp(x / \alpha) - 1)).

nn.GELU

Applies the Gaussian Error Linear Units function: \text{GELU}(x) = x \cdot \Phi(x), where \Phi(x) is the cumulative distribution function of the standard normal distribution.

nn.Sigmoid

Applies the element-wise function \text{Sigmoid}(x) = \sigma(x) = \frac{1}{1 + \exp(-x)}.

nn.SiLU

Applies the Sigmoid Linear Unit (SiLU) function, element-wise.

nn.Mish

Applies the Mish function, element-wise.

nn.Softplus

Applies the Softplus function \text{Softplus}(x) = \frac{1}{\beta} \log(1 + \exp(\beta x)) element-wise.

nn.Softshrink

Applies the soft shrinkage function element-wise: \text{SoftShrinkage}(x) = x - \lambda for x > \lambda, x + \lambda for x < -\lambda, and 0 otherwise.

nn.Softsign

Applies the element-wise function \text{SoftSign}(x) = \frac{x}{1 + |x|}.

nn.Tanh

Applies the Hyperbolic Tangent (Tanh) function element-wise.

nn.Tanhshrink

Applies the element-wise function \text{Tanhshrink}(x) = x - \tanh(x).

nn.Threshold

Thresholds each element of the input Tensor.

nn.GLU

Applies the gated linear unit function \text{GLU}(a, b) = a \otimes \sigma(b), where a is the first half of the input matrices and b is the second half.
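
These modules apply their non-linearity element-wise, so the output keeps the input's shape; a small sketch with arbitrary shapes and hyperparameters:

    import torch
    import torch.nn as nn

    x = torch.randn(4, 8)
    print(nn.ReLU()(x).min() >= 0)     # tensor(True): negatives are zeroed
    print(nn.GELU()(x).shape)          # torch.Size([4, 8]): shape is preserved

    # MultiheadAttention is the exception: it consumes (batch, seq, embed) with batch_first=True.
    attn = nn.MultiheadAttention(embed_dim=8, num_heads=2, batch_first=True)
    out, weights = attn(x.unsqueeze(0), x.unsqueeze(0), x.unsqueeze(0))  # self-attention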

Non-linear Activations (other)

nn.Softmin

Applies the Softmin function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range [0, 1] and sum to 1.

nn.Softmax

Applies the Softmax function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range [0, 1] and sum to 1.

nn.Softmax2d

Applies SoftMax over features to each spatial location.

nn.LogSoftmax

Applies the \log(\text{Softmax}(x)) function to an n-dimensional input Tensor.

nn.AdaptiveLogSoftmaxWithLoss

Efficient softmax approximation.
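
A small sketch showing that Softmax produces rows in [0, 1] that sum to 1, and that LogSoftmax matches its logarithm; shapes are arbitrary:

    import torch
    import torch.nn as nn

    logits = torch.randn(2, 5)
    probs = nn.Softmax(dim=-1)(logits)
    print(probs.sum(dim=-1))                         # each row sums to 1
    log_probs = nn.LogSoftmax(dim=-1)(logits)
    print(torch.allclose(log_probs, probs.log()))    # True, up to numerical precision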

Normalization Layers

nn.BatchNorm1d

Applies Batch Normalization over a 2D or 3D input as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift .

nn.BatchNorm2d

Applies Batch Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift .

nn.BatchNorm3d

Applies Batch Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift .

nn.LazyBatchNorm1d

A torch.nn.BatchNorm1d module with lazy initialization of the num_features argument, which is inferred from input.size(1).

nn.LazyBatchNorm2d

A torch.nn.BatchNorm2d module with lazy initialization of the num_features argument, which is inferred from input.size(1).

nn.LazyBatchNorm3d

A torch.nn.BatchNorm3d module with lazy initialization of the num_features argument, which is inferred from input.size(1).

nn.GroupNorm

Applies Group Normalization over a mini-batch of inputs.

nn.SyncBatchNorm

Applies Batch Normalization over a N-Dimensional input (a mini-batch of [N-2]D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift .

nn.InstanceNorm1d

Applies Instance Normalization over a 2D (unbatched) or 3D (batched) input.

nn.InstanceNorm2d

Applies Instance Normalization over a 3D (unbatched) or 4D (batched) input.

nn.InstanceNorm3d

Applies Instance Normalization over a 4D (unbatched) or 5D (batched) input.

nn.LazyInstanceNorm1d

A torch.nn.InstanceNorm1d module with lazy initialization of the num_features argument.

nn.LazyInstanceNorm2d

A torch.nn.InstanceNorm2d module with lazy initialization of the num_features argument.

nn.LazyInstanceNorm3d

A torch.nn.InstanceNorm3d module with lazy initialization of the num_features argument.

nn.LayerNorm

Applies Layer Normalization over a mini-batch of inputs.

nn.LocalResponseNorm

Applies local response normalization over an input signal.
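
A sketch contrasting what each normalization layer computes statistics over, with an arbitrary (N, C, H, W) input:

    import torch
    import torch.nn as nn

    x = torch.randn(8, 16, 10, 10)                      # (N, C, H, W)
    bn = nn.BatchNorm2d(16)                             # statistics over N, H, W for each channel
    gn = nn.GroupNorm(num_groups=4, num_channels=16)    # statistics per sample, per group of channels
    ln = nn.LayerNorm([16, 10, 10])                     # statistics per sample over the listed trailing dims
    print(bn(x).shape, gn(x).shape, ln(x).shape)        # all (8, 16, 10, 10)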

Recurrent Layers

nn.RNNBase

Base class for RNN modules (RNN, LSTM, GRU).

nn.RNN

Apply a multi-layer Elman RNN with tanh or ReLU non-linearity to an input sequence.

nn.LSTM

Apply a multi-layer long short-term memory (LSTM) RNN to an input sequence.

nn.GRU

Apply a multi-layer gated recurrent unit (GRU) RNN to an input sequence.

nn.RNNCell

An Elman RNN cell with tanh or ReLU non-linearity.

nn.LSTMCell

A long short-term memory (LSTM) cell.

nn.GRUCell

A gated recurrent unit (GRU) cell.
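
A minimal LSTM sketch (sizes are arbitrary); the outputs are the per-step hidden states of the last layer plus the final hidden and cell states:

    import torch
    import torch.nn as nn

    lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2, batch_first=True)
    x = torch.randn(3, 7, 10)            # (batch, seq_len, input_size)
    out, (h_n, c_n) = lstm(x)
    print(out.shape)                     # (3, 7, 20): last layer's hidden state at every step
    print(h_n.shape, c_n.shape)          # (2, 3, 20) each: final states for both layers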

Transformer Layers

nn.Transformer

A transformer model.

nn.TransformerEncoder

TransformerEncoder is a stack of N encoder layers.

nn.TransformerDecoder

TransformerDecoder is a stack of N decoder layers.

nn.TransformerEncoderLayer

TransformerEncoderLayer is made up of self-attention and a feedforward network.

nn.TransformerDecoderLayer

TransformerDecoderLayer is made up of self-attention, multi-head attention, and a feedforward network.
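
A small encoder sketch; d_model, nhead, and the source shape are arbitrary illustration values:

    import torch
    import torch.nn as nn

    layer = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)
    encoder = nn.TransformerEncoder(layer, num_layers=2)
    src = torch.randn(8, 16, 32)         # (batch, seq_len, d_model)
    memory = encoder(src)                # (8, 16, 32)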

Linear Layers

nn.Identity

A placeholder identity operator that is argument-insensitive.

nn.Linear

Applies a linear transformation to the incoming data: y = xA^T + b.

nn.Bilinear

Applies a bilinear transformation to the incoming data: y = x_1^T A x_2 + b.

nn.LazyLinear

A torch.nn.Linear module where in_features is inferred.
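
A short sketch of Linear versus LazyLinear (feature sizes are arbitrary); the lazy variant infers in_features on the first forward pass:

    import torch
    import torch.nn as nn

    x = torch.randn(4, 10)
    fc = nn.Linear(10, 5)                # y = x A^T + b
    print(fc(x).shape)                   # (4, 5)

    lazy = nn.LazyLinear(5)              # in_features not given up front
    print(lazy(x).shape)                 # (4, 5); lazy.weight is now materialized as (5, 10)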

Dropout Layers

nn.Dropout

During training, randomly zeroes some of the elements of the input tensor with probability p.

nn.Dropout1d

Randomly zero out entire channels (a channel is a 1D feature map).

nn.Dropout2d

Randomly zero out entire channels (a channel is a 2D feature map).

nn.Dropout3d

Randomly zero out entire channels (a channel is a 3D feature map).

nn.AlphaDropout

Applies Alpha Dropout over the input.

nn.FeatureAlphaDropout

Randomly masks out entire channels.

Sparse Layers

nn.Embedding

A simple lookup table that stores embeddings of a fixed dictionary and size.

nn.EmbeddingBag

Compute sums or means of 'bags' of embeddings, without instantiating the intermediate embeddings.
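
A lookup-table sketch with arbitrary vocabulary and embedding sizes; EmbeddingBag reduces each row ("bag") of indices to a single vector:

    import torch
    import torch.nn as nn

    emb = nn.Embedding(num_embeddings=1000, embedding_dim=16, padding_idx=0)
    tokens = torch.tensor([[1, 5, 0], [7, 2, 3]])      # integer indices; 0 acts as padding
    print(emb(tokens).shape)                           # (2, 3, 16)

    bag = nn.EmbeddingBag(1000, 16, mode="mean")
    print(bag(tokens).shape)                           # (2, 16): one mean vector per row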

Distance Functions

nn.CosineSimilarity

Returns cosine similarity between x_1 and x_2, computed along dim.

nn.PairwiseDistance

Computes the pairwise distance between input vectors, or between columns of input matrices.

Loss Functions

nn.L1Loss

Creates a criterion that measures the mean absolute error (MAE) between each element in the input x and target y.

nn.MSELoss

Creates a criterion that measures the mean squared error (squared L2 norm) between each element in the input x and target y.

nn.CrossEntropyLoss

This criterion computes the cross entropy loss between input logits and target.

nn.CTCLoss

The Connectionist Temporal Classification loss.

nn.NLLLoss

The negative log likelihood loss.

nn.PoissonNLLLoss

Negative log likelihood loss with Poisson distribution of target.

nn.GaussianNLLLoss

Gaussian negative log likelihood loss.

nn.KLDivLoss

The Kullback-Leibler divergence loss.

nn.BCELoss

Creates a criterion that measures the Binary Cross Entropy between the target and the input probabilities.

nn.BCEWithLogitsLoss

This loss combines a Sigmoid layer and the BCELoss in one single class.

nn.MarginRankingLoss

Creates a criterion that measures the loss given inputs x_1, x_2, two 1D mini-batch or 0D Tensors, and a label 1D mini-batch or 0D Tensor y (containing 1 or -1).

nn.HingeEmbeddingLoss

Measures the loss given an input tensor x and a labels tensor y (containing 1 or -1).

nn.MultiLabelMarginLoss

Creates a criterion that optimizes a multi-class multi-classification hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor) and output y (which is a 2D Tensor of target class indices).

nn.HuberLoss

Creates a criterion that uses a squared term if the absolute element-wise error falls below delta and a delta-scaled L1 term otherwise.

nn.SmoothL1Loss

Creates a criterion that uses a squared term if the absolute element-wise error falls below beta and an L1 term otherwise.

nn.SoftMarginLoss

Creates a criterion that optimizes a two-class classification logistic loss between input tensor x and target tensor y (containing 1 or -1).

nn.MultiLabelSoftMarginLoss

Creates a criterion that optimizes a multi-label one-versus-all loss based on max-entropy, between input x and target y of size (N, C).

nn.CosineEmbeddingLoss

Creates a criterion that measures the loss given input tensors x_1, x_2 and a Tensor label y with values 1 or -1.

nn.MultiMarginLoss

Creates a criterion that optimizes a multi-class classification hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor) and output y (which is a 1D tensor of target class indices, 0 \leq y \leq \text{x.size}(1) - 1).

nn.TripletMarginLoss

Creates a criterion that measures the triplet loss given input tensors x_1, x_2, x_3 and a margin with a value greater than 0.

nn.TripletMarginWithDistanceLoss

Creates a criterion that measures the triplet loss given input tensors a, p, and n (representing anchor, positive, and negative examples, respectively), and a nonnegative, real-valued function ("distance function") used to compute the relationship between the anchor and positive example ("positive distance") and the anchor and negative example ("negative distance").
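
Loss modules are criteria applied to predictions and targets; a sketch with arbitrary shapes, contrasting a classification loss and a regression loss:

    import torch
    import torch.nn as nn

    logits = torch.randn(4, 10)                    # raw, unnormalized scores for 10 classes
    targets = torch.randint(0, 10, (4,))           # class indices
    ce = nn.CrossEntropyLoss()(logits, targets)    # scalar loss

    preds = torch.randn(4, requires_grad=True)
    mse = nn.MSELoss()(preds, torch.randn(4))
    mse.backward()                                 # gradients flow back to preds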

Vision Layers

nn.PixelShuffle

Rearrange elements in a tensor according to an upscaling factor.

nn.PixelUnshuffle

Reverse the PixelShuffle operation.

nn.Upsample

Upsamples given multi-channel 1D (temporal), 2D (spatial) or 3D (volumetric) data.

nn.UpsamplingNearest2d

Applies a 2D nearest neighbor upsampling to an input signal composed of several input channels.

nn.UpsamplingBilinear2d

Applies a 2D bilinear upsampling to an input signal composed of several input channels.

Shuffle Layers

nn.ChannelShuffle

Divides and rearranges the channels in a tensor.

DataParallel Layers (multi-GPU, distributed)

nn.DataParallel

Implements data parallelism at the module level.

nn.parallel.DistributedDataParallel

Implements distributed data parallelism, based on torch.distributed, at the module level.

Utilities

From the torch.nn.utils module:

Utility functions to clip parameter gradients.

clip_grad_norm_

Clip the gradient norm of an iterable of parameters.

clip_grad_norm

Clip the gradient norm of an iterable of parameters (deprecated in favor of clip_grad_norm_).

clip_grad_value_

Clip the gradients of an iterable of parameters at specified value.
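
A gradient-clipping sketch (the model and threshold values are arbitrary); both functions modify gradients in place after backward():

    import torch
    import torch.nn as nn
    from torch.nn.utils import clip_grad_norm_, clip_grad_value_

    model = nn.Linear(10, 2)
    model(torch.randn(4, 10)).sum().backward()
    total_norm = clip_grad_norm_(model.parameters(), max_norm=1.0)   # rescale so the total norm <= 1.0
    clip_grad_value_(model.parameters(), clip_value=0.5)             # clamp each gradient entry to [-0.5, 0.5]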

Utility functions to flatten and unflatten Module parameters to and from a single vector.

parameters_to_vector

Flatten an iterable of parameters into a single vector.

vector_to_parameters

Copy slices of a vector into an iterable of parameters.

Utility functions to fuse Modules with BatchNorm modules.

fuse_conv_bn_eval

Fuse a convolutional module and a BatchNorm module into a single, new convolutional module.

fuse_conv_bn_weights

Fuse convolutional module parameters and BatchNorm module parameters into new convolutional module parameters.

fuse_linear_bn_eval

Fuse a linear module and a BatchNorm module into a single, new linear module.

fuse_linear_bn_weights

Fuse linear module parameters and BatchNorm module parameters into new linear module parameters.

Utility functions to convert Module parameter memory formats.

convert_conv2d_weight_memory_format

Convert the memory_format of nn.Conv2d.weight to the given memory_format.

Utility functions to apply and remove weight normalization from Module parameters.

weight_norm

Apply weight normalization to a parameter in the given module.

remove_weight_norm

Remove the weight normalization reparameterization from a module.

spectral_norm

Apply spectral normalization to a parameter in the given module.

remove_spectral_norm

Remove the spectral normalization reparameterization from a module.

Utility functions for initializing Module parameters.

skip_init

Given a module class object and args / kwargs, instantiate the module without initializing parameters / buffers.

Utility classes and functions for pruning Module parameters.

prune.BasePruningMethod

Abstract base class for creation of new pruning techniques.

prune.PruningContainer

Container holding a sequence of pruning methods for iterative pruning.

prune.Identity

Utility pruning method that does not prune any units but generates the pruning parametrization with a mask of ones.

prune.RandomUnstructured

Prune (currently unpruned) units in a tensor at random.

prune.L1Unstructured

Prune (currently unpruned) units in a tensor by zeroing out the ones with the lowest L1-norm.

prune.RandomStructured

Prune entire (currently unpruned) channels in a tensor at random.

prune.LnStructured

Prune entire (currently unpruned) channels in a tensor based on their Ln-norm.

prune.CustomFromMask

Prune a tensor by applying a pre-computed, user-provided mask.

prune.identity

Apply pruning reparametrization without pruning any units.

prune.random_unstructured

Prune tensor by removing random (currently unpruned) units.

prune.l1_unstructured

Prune tensor by removing units with the lowest L1-norm.

prune.random_structured

Prune tensor by removing random channels along the specified dimension.

prune.ln_structured

Prune tensor by removing channels with the lowest Ln-norm along the specified dimension.

prune.global_unstructured

Globally prunes tensors corresponding to all parameters in parameters by applying the specified pruning_method.

prune.custom_from_mask

Prune tensor corresponding to parameter called name in module by applying the pre-computed mask in mask.

prune.remove

Remove the pruning reparameterization from a module and the pruning method from the forward hook.

prune.is_pruned

Check if a module is pruned by looking for pruning pre-hooks.
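
A pruning sketch with an arbitrary layer and pruning amount; l1_unstructured adds weight_orig and weight_mask, and remove() bakes the mask in:

    import torch
    import torch.nn as nn
    from torch.nn.utils import prune

    layer = nn.Linear(10, 10)
    prune.l1_unstructured(layer, name="weight", amount=0.3)   # zero the 30% smallest-magnitude weights
    print(prune.is_pruned(layer))                             # True
    print(float((layer.weight == 0).float().mean()))          # ~0.3
    prune.remove(layer, "weight")                             # make the pruning permanent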

Parametrizations implemented using the new parametrization functionality in torch.nn.utils.parametrize.register_parametrization().

parametrizations.orthogonal

Apply an orthogonal or unitary parametrization to a matrix or a batch of matrices.

parametrizations.weight_norm

Apply weight normalization to a parameter in the given module.

parametrizations.spectral_norm

Apply spectral normalization to a parameter in the given module.

Utility functions to parametrize Tensors on existing Modules. Note that these functions can be used to parametrize a given Parameter or Buffer given a specific function that maps from an input space to the parametrized space. They are not parametrizations that would transform an object into a parameter. See the Parametrizations tutorial for more information on how to implement your own parametrizations.

parametrize.register_parametrization

Register a parametrization to a tensor in a module.

parametrize.remove_parametrizations

Remove the parametrizations on a tensor in a module.

parametrize.cached

Context manager that enables the caching system within parametrizations registered with register_parametrization().

parametrize.is_parametrized

Determine if a module has a parametrization.

parametrize.ParametrizationList

A sequential container that holds and manages the original parameters or buffers of a parametrized torch.nn.Module.
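
A sketch of registering a custom parametrization; the Symmetric module below is an illustrative example (it constrains a square weight to be symmetric), not a built-in:

    import torch
    import torch.nn as nn
    import torch.nn.utils.parametrize as parametrize

    class Symmetric(nn.Module):
        def forward(self, W):
            # Map any square matrix to a symmetric one.
            return W.triu() + W.triu(1).transpose(-1, -2)

    layer = nn.Linear(4, 4)
    parametrize.register_parametrization(layer, "weight", Symmetric())
    print(torch.allclose(layer.weight, layer.weight.T))   # True: weight is recomputed on access
    print(parametrize.is_parametrized(layer, "weight"))   # True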

Utility functions to call a given Module in a stateless manner.

stateless.functional_call

Perform a functional call on the module by replacing the module parameters and buffers with the provided ones.

Utility functions in other modules

nn.utils.rnn.PackedSequence

Holds the data and list of batch_sizes of a packed sequence.

nn.utils.rnn.pack_padded_sequence

Packs a Tensor containing padded sequences of variable length.

nn.utils.rnn.pad_packed_sequence

Pad a packed batch of variable length sequences.

nn.utils.rnn.pad_sequence

Pad a list of variable length Tensors with padding_value.

nn.utils.rnn.pack_sequence

Packs a list of variable length Tensors.

nn.utils.rnn.unpack_sequence

Unpack PackedSequence into a list of variable length Tensors.

nn.utils.rnn.unpad_sequence

Unpad padded Tensor into a list of variable length Tensors.
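
A packing round-trip sketch with arbitrary sequence lengths and feature sizes; RNN modules accept PackedSequence inputs directly:

    import torch
    import torch.nn as nn
    from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence

    seqs = [torch.randn(5, 8), torch.randn(3, 8), torch.randn(2, 8)]   # variable lengths
    padded = pad_sequence(seqs, batch_first=True)                      # (3, 5, 8)
    packed = pack_padded_sequence(padded, lengths=[5, 3, 2], batch_first=True)
    out, _ = nn.GRU(8, 16, batch_first=True)(packed)                   # out is a PackedSequence
    unpacked, lengths = pad_packed_sequence(out, batch_first=True)     # (3, 5, 16), lengths [5, 3, 2]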

nn.Flatten

Flattens a contiguous range of dims into a tensor.

nn.Unflatten

Unflattens a tensor dim expanding it to a desired shape.

Quantized Functions

Quantization refers to techniques for performing computations and storing tensors at lower bitwidths than floating point precision. PyTorch supports both per tensor and per channel asymmetric linear quantization. To learn more about how to use quantized functions in PyTorch, please refer to the Quantization documentation.

Lazy Modules Initialization

nn.modules.lazy.LazyModuleMixin

A mixin for modules that lazily initialize parameters, also known as "lazy modules".
