MaxPool1d
- class torch.nn.MaxPool1d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)[source]
Applies a 1D max pooling over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size $(N, C, L)$ and output $(N, C, L_{out})$ can be precisely described as:

$$
out(N_i, C_j, k) = \max_{m=0, \ldots, \text{kernel\_size}-1} input(N_i, C_j, \text{stride} \times k + m)
$$
If `padding` is non-zero, then the input is implicitly padded with negative infinity on both sides for `padding` number of points. `dilation` is the stride between the elements within the sliding window. This link has a nice visualization of the pooling parameters.

Note
When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within the left padding or the input. Sliding windows that would start in the right padded region are ignored.
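The effect of `ceil_mode` on the output length can be sketched in plain Python. The helper below (a hypothetical function, not part of torch) applies the shape formula from the Shape section, including the rule above that a window starting entirely in the right padding is dropped:

```python
import math

def maxpool1d_out_len(l_in, kernel_size, stride, padding=0, dilation=1,
                      ceil_mode=False):
    """Output length of MaxPool1d, following the formula in the Shape section."""
    numer = l_in + 2 * padding - dilation * (kernel_size - 1) - 1
    rounding = math.ceil if ceil_mode else math.floor
    l_out = rounding(numer / stride) + 1
    if ceil_mode and (l_out - 1) * stride >= l_in + padding:
        # The last window would start in the right padded region: ignore it.
        l_out -= 1
    return l_out

# With floor, a trailing partial window is dropped; with ceil it is kept.
print(maxpool1d_out_len(10, kernel_size=3, stride=2))                  # 4
print(maxpool1d_out_len(10, kernel_size=3, stride=2, ceil_mode=True))  # 5
```

With `ceil_mode=True` the window starting at position 8 is kept even though it runs off the end of the length-10 input, because it starts within the input.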
- Parameters
  - kernel_size (Union[int, Tuple[int]]) – The size of the sliding window, must be > 0.
  - stride (Union[int, Tuple[int]]) – The stride of the sliding window, must be > 0. Default value is kernel_size.
  - padding (Union[int, Tuple[int]]) – Implicit negative infinity padding to be added on both sides, must be >= 0 and <= kernel_size / 2.
  - dilation (Union[int, Tuple[int]]) – The stride between elements within a sliding window, must be > 0.
  - return_indices (bool) – If True, will return the argmax along with the max values. Useful for torch.nn.MaxUnpool1d later.
  - ceil_mode (bool) – If True, will use ceil instead of floor to compute the output shape. This ensures that every element in the input tensor is covered by a sliding window.
- Shape:
  - Input: $(N, C, L_{in})$ or $(C, L_{in})$.
  - Output: $(N, C, L_{out})$ or $(C, L_{out})$, where

$$
L_{out} = \left\lfloor \frac{L_{in} + 2 \times \text{padding} - \text{dilation} \times (\text{kernel\_size} - 1) - 1}{\text{stride}} + 1 \right\rfloor
$$
Examples:
```python
>>> # pool of size=3, stride=2
>>> m = nn.MaxPool1d(3, stride=2)
>>> input = torch.randn(20, 16, 50)
>>> output = m(input)
```
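The output shape of the example above can be checked against the formula from the Shape section without running torch (a minimal arithmetic sketch, assuming the default padding=0 and dilation=1):

```python
import math

# Input length 50, kernel 3, stride 2, padding 0, dilation 1.
l_in, kernel_size, stride = 50, 3, 2
l_out = math.floor((l_in - (kernel_size - 1) - 1) / stride + 1)
print(l_out)  # 24, so output has shape (20, 16, 24)
```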