CenterCrop
- class torchvision.transforms.v2.CenterCrop(size: Union[int, Sequence[int]])[source]
[BETA] Crop the input at the center.
Note
The CenterCrop transform is in Beta stage, and while we do not expect disruptive breaking changes, some APIs may slightly change according to user feedback. Please submit any feedback you may have in this issue: https://github.com/pytorch/vision/issues/6753.
If the input is a torch.Tensor or a TVTensor (e.g. Image, Video, BoundingBoxes, etc.), it can have an arbitrary number of leading batch dimensions. For example, the image can have [..., C, H, W] shape. A bounding box can have [..., 4] shape.
If the image size is smaller than the output size along any edge, the image is padded with 0 and then center cropped.
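As a concrete illustration of this behavior, here is a minimal sketch (not part of the original documentation; it assumes a torchvision release where the v2 transforms namespace is available, e.g. 0.15 or later):

>>> import torch
>>> from torchvision.transforms import v2
>>> crop = v2.CenterCrop((128, 128))
>>> images = torch.rand(4, 3, 256, 320)   # batch of 4 RGB images, [..., C, H, W]
>>> crop(images).shape                    # leading batch dimensions are preserved
torch.Size([4, 3, 128, 128])
>>> small = torch.rand(3, 100, 100)       # smaller than the target size
>>> crop(small).shape                     # padded with 0, then center cropped
torch.Size([3, 128, 128])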
- Parameters:
size (sequence or int) – Desired output size of the crop. If size is an int instead of a sequence like (h, w), a square crop (size, size) is made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).
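The accepted forms of size behave as follows in this short sketch (again an illustrative example, not from the original docs, assuming the same v2 import as above):

>>> import torch
>>> from torchvision.transforms import v2
>>> x = torch.rand(3, 200, 200)
>>> v2.CenterCrop(128)(x).shape           # int -> square crop (128, 128)
torch.Size([3, 128, 128])
>>> v2.CenterCrop([128])(x).shape         # length-1 sequence -> (128, 128)
torch.Size([3, 128, 128])
>>> v2.CenterCrop((128, 96))(x).shape     # explicit (h, w)
torch.Size([3, 128, 96])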
Examples using CenterCrop:
- Illustration of transforms