TenCrop
- class torchvision.transforms.v2.TenCrop(size: Union[int, Sequence[int]], vertical_flip: bool = False)
Crop the image or video into four corners and the central crop plus the flipped version of these (horizontal flipping is used by default).
If the input is a torch.Tensor or an Image or a Video, it can have an arbitrary number of leading batch dimensions. For example, the image can have [..., C, H, W] shape.

See FiveCrop for an example.

Note
This transform returns a tuple of images and there may be a mismatch in the number of inputs and targets your Dataset returns. See below for an example of how to deal with this.
- Parameters:
size (sequence or int) – Desired output size of the crop. If size is an int instead of a sequence like (h, w), a square crop (size, size) is made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).
vertical_flip (bool, optional) – Use vertical flipping instead of horizontal.
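Example

A minimal sketch of how the tuple of crops can be handled, assuming the input is already a tensor image; the 224 crop size and the model in the commented lines are hypothetical placeholders, and stacking the crops then averaging the per-crop predictions is one common way to reconcile the tuple output with per-sample targets.

>>> import torch
>>> from torchvision.transforms import v2
>>> ten_crop = v2.TenCrop(size=224)
>>> img = torch.rand(3, 256, 256)        # a [C, H, W] tensor image
>>> crops = ten_crop(img)                # tuple of 10 crops, each [C, 224, 224]
>>> batch = torch.stack(crops)           # stack into a single [10, C, 224, 224] tensor
>>> batch.shape
torch.Size([10, 3, 224, 224])
>>> # At evaluation time, the crop dimension is typically folded into the batch
>>> # dimension and the per-crop predictions are averaged afterwards:
>>> # logits = model(batch)              # [10, num_classes]
>>> # prediction = logits.mean(dim=0)    # average over the ten crops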