CenterCrop¶
- class torchvision.transforms.v2.CenterCrop(size: Union[int, Sequence[int]])[source]¶
[BETA] Crop the input at the center.
Warning
The CenterCrop transform is in Beta stage, and while we do not expect major breaking changes, some APIs may still change according to user feedback. Please submit any feedback you may have in this issue: https://github.com/pytorch/vision/issues/6753, and you can also check out https://github.com/pytorch/vision/issues/7319 to learn more about the APIs that we suspect might involve future changes.
If the input is a torch.Tensor or a Datapoint (e.g. Image, Video, BoundingBox, etc.), it can have an arbitrary number of leading batch dimensions. For example, an image can have [..., C, H, W] shape and a bounding box can have [..., 4] shape.
If the image size is smaller than the output size along any edge, the image is padded with 0 and then center cropped.
- Parameters:
size (sequence or int) – Desired output size of the crop. If size is an int instead of sequence like (h, w), a square crop (size, size) is made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).
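A brief illustration of the accepted size forms (the values are examples only):
>>> from torchvision.transforms import v2
>>> t1 = v2.CenterCrop(224)          # int: square crop of (224, 224)
>>> t2 = v2.CenterCrop((128, 256))   # sequence (h, w): crop of height 128, width 256
>>> t3 = v2.CenterCrop([96])         # length-1 sequence: interpreted as (96, 96)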
Examples using CenterCrop:
Getting started with transforms v2