DeformConv2d
class torchvision.ops.DeformConv2d(in_channels: int, out_channels: int, kernel_size: int, stride: int = 1, padding: int = 0, dilation: int = 1, groups: int = 1, bias: bool = True)

    See deform_conv2d().
    forward(input: torch.Tensor, offset: torch.Tensor, mask: Optional[torch.Tensor] = None) → torch.Tensor

        Parameters
            input (Tensor[batch_size, in_channels, in_height, in_width]) – input tensor
            offset (Tensor[batch_size, 2 * offset_groups * kernel_height * kernel_width, out_height, out_width]) – offsets to be applied for each position in the convolution kernel.
            mask (Tensor[batch_size, offset_groups * kernel_height * kernel_width, out_height, out_width]) – masks to be applied for each position in the convolution kernel.
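    A minimal usage sketch follows, assuming the constructor and forward signatures above; the concrete sizes (batch of 4, 10×10 input, 3×3 kernel, one offset group) are illustrative values chosen here, not part of the reference.

        import torch
        from torchvision.ops import DeformConv2d

        batch_size, in_channels, out_channels = 4, 3, 8
        kernel_height, kernel_width = 3, 3
        height, width = 10, 10

        conv = DeformConv2d(in_channels, out_channels,
                            kernel_size=kernel_height, padding=1)

        x = torch.rand(batch_size, in_channels, height, width)

        # With stride=1 and padding=1, out_height == height and out_width == width.
        # offset carries 2 values (y, x) per kernel position per offset group.
        offset = torch.rand(batch_size, 2 * kernel_height * kernel_width, height, width)

        # Optional modulation mask: one scalar per kernel position per offset group.
        mask = torch.rand(batch_size, kernel_height * kernel_width, height, width)

        out = conv(x, offset, mask)
        print(out.shape)  # torch.Size([4, 8, 10, 10])

    Passing mask=None (or omitting it) performs plain deformable convolution without modulation.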