class torchvision.transforms.v2.RandomResize(min_size: int, max_size: int, interpolation: Union[InterpolationMode, int] = InterpolationMode.BILINEAR, antialias: Optional[bool] = True)
[BETA] Randomly resize the input.
The RandomResize transform is in Beta stage, and while we do not expect disruptive breaking changes, some APIs may slightly change according to user feedback. Please submit any feedback you may have in this issue: https://github.com/pytorch/vision/issues/6753.
This transformation can be used together with RandomCrop as data augmentation to train models on image segmentation tasks.
The output spatial size is randomly sampled from the interval [min_size, max_size]:

size = uniform_sample(min_size, max_size)
output_width = size
output_height = size
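The sampling above is pseudocode. A minimal sketch of the same step in plain Python, assuming the sampled size is an integer drawn uniformly from the inclusive range (the helper name `sample_output_size` is illustrative, not part of the API):

```python
import random

def sample_output_size(min_size: int, max_size: int) -> tuple[int, int]:
    # Draw a single size uniformly at random and use it for both the
    # output height and width, so the output is always square.
    size = random.randint(min_size, max_size)  # assumption: inclusive bounds
    return size, size

h, w = sample_output_size(224, 256)
assert h == w and 224 <= h <= 256
```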
If the input is a torch.Tensor or a TVTensor (e.g. Image, Video, BoundingBoxes etc.) it can have an arbitrary number of leading batch dimensions. For example, the image can have [..., C, H, W] shape. A bounding box can have [..., 4] shape.
Parameters:
min_size (int) – Minimum output size for random sampling
max_size (int) – Maximum output size for random sampling
interpolation (InterpolationMode, optional) – Desired interpolation enum defined by torchvision.transforms.InterpolationMode. Default is InterpolationMode.BILINEAR. If the input is a Tensor, only InterpolationMode.NEAREST, InterpolationMode.NEAREST_EXACT, InterpolationMode.BILINEAR and InterpolationMode.BICUBIC are supported. The corresponding Pillow integer constants, e.g. PIL.Image.BILINEAR, are accepted as well.
antialias (bool, optional) –
Whether to apply antialiasing. It only affects tensors with bilinear or bicubic modes and it is ignored otherwise: on PIL images, antialiasing is always applied on bilinear or bicubic modes; on other modes (for PIL images and tensors), antialiasing makes no sense and this parameter is ignored. Possible values are:
True (default): will apply antialiasing for bilinear or bicubic modes. Other modes aren't affected. This is probably what you want to use.
False: will not apply antialiasing for tensors on any mode. PIL images are still antialiased on bilinear or bicubic modes, because PIL doesn't support no antialias.
None: equivalent to False for tensors and True for PIL images. This value exists for legacy reasons and you probably don't want to use it unless you really know what you are doing.
The default value changed from None to True in v0.17, for the PIL and Tensor backends to be consistent.