# Conv1d

class torch.nn.Conv1d(in_channels: int, out_channels: int, kernel_size: Union[T, Tuple[T]], stride: Union[T, Tuple[T]] = 1, padding: Union[T, Tuple[T]] = 0, dilation: Union[T, Tuple[T]] = 1, groups: int = 1, bias: bool = True, padding_mode: str = 'zeros')

Applies a 1D convolution over an input signal composed of several input planes.

In the simplest case, the output value of the layer with input size $(N, C_{\text{in}}, L)$ and output $(N, C_{\text{out}}, L_{\text{out}})$ can be precisely described as:

$\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k = 0}^{C_{in} - 1} \text{weight}(C_{\text{out}_j}, k) \star \text{input}(N_i, k)$

where $\star$ is the valid cross-correlation operator, $N$ is the batch size, $C$ denotes the number of channels, and $L$ is the length of the signal sequence.
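The formula can be checked numerically. The sketch below (illustrative names `m` and `x`, small sizes chosen arbitrarily) reproduces the module's output by summing, for each output channel, the bias and the valid cross-correlation of each input plane with the corresponding weight slice:

```python
import torch
import torch.nn as nn

# Minimal sketch, assuming default stride/padding/dilation: apply the
# formula above directly and compare against nn.Conv1d.
m = nn.Conv1d(in_channels=2, out_channels=3, kernel_size=4)
x = torch.randn(1, 2, 10)
L_out = 10 - 4 + 1  # "valid" cross-correlation: no padding

with torch.no_grad():
    # out(N_i, C_out_j) starts at bias(C_out_j) ...
    manual = m.bias.view(1, 3, 1).expand(1, 3, L_out).clone()
    for j in range(3):              # output channels C_out_j
        for k in range(2):          # input channels k
            for t in range(L_out):  # sliding-window positions
                # ... plus weight(C_out_j, k) cross-correlated with input(N_i, k)
                manual[0, j, t] += (m.weight[j, k] * x[0, k, t:t + 4]).sum()

    print(torch.allclose(m(x), manual, atol=1e-5))  # True
```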

This module supports TensorFloat32.

• stride controls the stride for the cross-correlation, a single number or a one-element tuple.

• padding controls the amount of implicit padding applied to the input: padding points are added to each side.

• dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe in words, but the animations in the convolution arithmetic guide (https://github.com/vdumoulin/conv_arithmetic) give a nice visualization of what dilation does.

• groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,

  • At groups=1, all inputs are convolved to all outputs.

  • At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, with the two outputs subsequently concatenated.

  • At groups=in_channels, each input channel is convolved with its own set of filters (of size $\frac{\text{out\_channels}}{\text{in\_channels}}$).

Note

When groups == in_channels and out_channels == K * in_channels, where K is a positive integer, this operation is also known as a “depthwise convolution”.

In other words, for an input of size $(N, C_{in}, L_{in})$ , a depthwise convolution with a depthwise multiplier K can be performed with the arguments $(C_\text{in}=C_\text{in}, C_\text{out}=C_\text{in} \times \text{K}, ..., \text{groups}=C_\text{in})$ .
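A minimal sketch of such a depthwise convolution, with an assumed depthwise multiplier K=2 (all sizes here are illustrative):

```python
import torch
import torch.nn as nn

in_channels, K = 8, 2  # illustrative sizes; K is the depthwise multiplier
depthwise = nn.Conv1d(in_channels, in_channels * K, kernel_size=3,
                      groups=in_channels)

x = torch.randn(4, in_channels, 32)
print(depthwise(x).shape)      # torch.Size([4, 16, 30])
# Each filter sees in_channels / groups == 1 input channel:
print(depthwise.weight.shape)  # torch.Size([16, 1, 3])
```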

Note

In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information.
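For example, the global switch mentioned above can be set before running the model (disabling cuDNN autotuning as well is a common companion step, though only the deterministic flag is required by the note above):

```python
import torch

torch.backends.cudnn.deterministic = True  # prefer deterministic algorithms
torch.backends.cudnn.benchmark = False     # assumption: also disable autotuning
```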

Parameters
• in_channels (int) – Number of channels in the input signal

• out_channels (int) – Number of channels produced by the convolution

• kernel_size (int or tuple) – Size of the convolving kernel

• stride (int or tuple, optional) – Stride of the convolution. Default: 1

• padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0

• padding_mode (string, optional) – 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'

• dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1

• groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1

• bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
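The padding-related parameters can be compared directly. In the sketch below (arbitrary sizes), padding=1 with kernel_size=3 preserves the input length, and the padding_mode values differ only in what fills the borders:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 4, 9)  # illustrative input
for mode in ('zeros', 'reflect', 'replicate', 'circular'):
    conv = nn.Conv1d(4, 4, kernel_size=3, padding=1, padding_mode=mode)
    print(mode, conv(x).shape)  # every mode prints torch.Size([1, 4, 9])
```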

Shape:
• Input: $(N, C_{in}, L_{in})$

• Output: $(N, C_{out}, L_{out})$ where

$L_{out} = \left\lfloor\frac{L_{in} + 2 \times \text{padding} - \text{dilation} \times (\text{kernel\_size} - 1) - 1}{\text{stride}} + 1\right\rfloor$
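As a worked instance of this formula (illustrative values, including a non-default dilation):

```python
import math
import torch
import torch.nn as nn

L_in, padding, dilation, kernel_size, stride = 50, 1, 2, 3, 2
L_out = math.floor(
    (L_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1
)
print(L_out)  # 24

# The formula matches the module's actual output length:
m = nn.Conv1d(4, 8, kernel_size, stride=stride, padding=padding,
              dilation=dilation)
print(m(torch.randn(2, 4, L_in)).shape)  # torch.Size([2, 8, 24])
```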
Variables
• Conv1d.weight (Tensor) – the learnable weights of the module of shape $(\text{out\_channels}, \frac{\text{in\_channels}}{\text{groups}}, \text{kernel\_size})$. The values of these weights are sampled from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$ where $k = \frac{\text{groups}}{C_\text{in} * \text{kernel\_size}}$

• Conv1d.bias (Tensor) – the learnable bias of the module of shape (out_channels). If bias is True, then the values of these weights are sampled from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$ where $k = \frac{\text{groups}}{C_\text{in} * \text{kernel\_size}}$
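These shapes and the initialization bound can be inspected directly; for the module in this sketch (arbitrary sizes, groups=1), $k = \frac{1}{16 \times 3}$:

```python
import torch.nn as nn

m = nn.Conv1d(16, 33, kernel_size=3)            # groups=1
k = 1 / (16 * 3)                                # groups / (C_in * kernel_size)
print(m.weight.shape)                           # torch.Size([33, 16, 3])
print(m.bias.shape)                             # torch.Size([33])
print(bool(m.weight.abs().max() <= k ** 0.5))   # True: U(-sqrt(k), sqrt(k))
```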

Examples:

>>> import torch
>>> from torch import nn
>>> m = nn.Conv1d(16, 33, 3, stride=2)
>>> input = torch.randn(20, 16, 50)
>>> output = m(input)  # shape: (20, 33, 24)