torch.nn.functional.pad(input, pad, mode='constant', value=None) → Tensor

The padding size by which to pad some dimensions of input is described starting from the last dimension and moving forward. $\left\lfloor\frac{\text{len(pad)}}{2}\right\rfloor$ dimensions of input will be padded. For example, to pad only the last dimension of the input tensor, pad has the form $(\text{padding\_left}, \text{padding\_right})$; to pad the last 2 dimensions of the input tensor, use $(\text{padding\_left}, \text{padding\_right},$ $\text{padding\_top}, \text{padding\_bottom})$; to pad the last 3 dimensions, use $(\text{padding\_left}, \text{padding\_right},$ $\text{padding\_top}, \text{padding\_bottom},$ $\text{padding\_front}, \text{padding\_back})$.
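To make the ordering concrete, here is a small runnable sketch (not part of the original docs) showing how the pad tuple is consumed from the last dimension backward:

```python
import torch
import torch.nn.functional as F

x = torch.zeros(2, 3, 4)   # shape (2, 3, 4)

# Pad only the last dimension: (left, right)
y = F.pad(x, (1, 2))
print(y.shape)             # torch.Size([2, 3, 7])

# Pad the last two dimensions: (left, right, top, bottom)
z = F.pad(x, (1, 2, 3, 4))
print(z.shape)             # torch.Size([2, 10, 7])
```

Note that the first dimension of `x` is untouched in both calls, since only `len(pad) // 2` trailing dimensions are padded.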

See torch.nn.ConstantPad2d, torch.nn.ReflectionPad2d, and torch.nn.ReplicationPad2d for concrete examples of how each of the padding modes works. Constant padding is implemented for arbitrary dimensions. Replicate and reflection padding are implemented for padding the last 3 dimensions of a 4D or 5D input tensor, the last 2 dimensions of a 3D or 4D input tensor, or the last dimension of a 2D or 3D input tensor.
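A short illustration (my own example, not from the original docs) of the difference between the two non-constant modes on a 3D input, where a 2-element pad tuple pads the last dimension:

```python
import torch
import torch.nn.functional as F

x = torch.arange(4.0).reshape(1, 1, 4)   # last dim holds [0., 1., 2., 3.]

# 'reflect' mirrors the values without repeating the edge element.
print(F.pad(x, (2, 2), mode='reflect'))     # [2., 1., 0., 1., 2., 3., 2., 1.]

# 'replicate' repeats the edge element itself.
print(F.pad(x, (2, 2), mode='replicate'))   # [0., 0., 0., 1., 2., 3., 3., 3.]
```

For `'reflect'`, each padding amount must be smaller than the size of the dimension being padded (here 2 < 4), since the mirror excludes the edge element.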

Note

When using the CUDA backend, this operation may induce nondeterministic behaviour in its backward pass that is not easily switched off. Please see the notes on Reproducibility for background.

Parameters
• input (Tensor) – N-dimensional tensor

• pad (tuple) – m-elements tuple, where $\frac{m}{2} \leq$ input dimensions and $m$ is even.

• mode – 'constant', 'reflect', 'replicate' or 'circular'. Default: 'constant'

• value – fill value for 'constant' padding. Default: 0

Examples:

>>> t4d = torch.empty(3, 3, 4, 2)
>>> p1d = (1, 1)  # pad last dim by 1 on each side
>>> torch.nn.functional.pad(t4d, p1d).size()  # effectively zero padding
torch.Size([3, 3, 4, 4])
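The same tensor can be padded along more trailing dimensions by extending the pad tuple; a runnable sketch (my continuation of the example above, with per-side amounts chosen for illustration):

```python
import torch
import torch.nn.functional as F

t4d = torch.empty(3, 3, 4, 2)

# Pad the last two dimensions with different amounts per side.
p2d = (1, 1, 2, 2)  # (left, right, top, bottom)
out = F.pad(t4d, p2d, "constant", 0)
print(out.size())   # torch.Size([3, 3, 8, 4])

# Pad the last three dimensions.
p3d = (0, 1, 2, 1, 3, 3)  # (left, right, top, bottom, front, back)
out = F.pad(t4d, p3d, "constant", 0)
print(out.size())   # torch.Size([3, 9, 7, 3])
```

Each dimension's new size is its old size plus the two per-side amounts for that dimension, e.g. with `p3d` the last dimension grows from 2 to 2 + 0 + 1 = 3.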