
AutoAugment

class torchvision.transforms.v2.AutoAugment(policy: AutoAugmentPolicy = AutoAugmentPolicy.IMAGENET, interpolation: Union[InterpolationMode, int] = InterpolationMode.NEAREST, fill: Union[int, float, Sequence[int], Sequence[float], None, Dict[Union[Type, str], Optional[Union[int, float, Sequence[int], Sequence[float]]]]] = None)[source]

AutoAugment data augmentation method based on “AutoAugment: Learning Augmentation Strategies from Data”.

This transformation works on images and videos only.

If the input is a torch.Tensor, it should be of type torch.uint8, and it is expected to have […, 1 or 3, H, W] shape, where … means an arbitrary number of leading dimensions. If the input is a PIL Image, it is expected to be in mode “L” or “RGB”.

Parameters:
  • policy (AutoAugmentPolicy, optional) – Desired policy enum defined by torchvision.transforms.autoaugment.AutoAugmentPolicy. Default is AutoAugmentPolicy.IMAGENET.

  • interpolation (InterpolationMode, optional) – Desired interpolation enum defined by torchvision.transforms.InterpolationMode. Default is InterpolationMode.NEAREST. If input is Tensor, only InterpolationMode.NEAREST, InterpolationMode.BILINEAR are supported.

  • fill (sequence or number, optional) – Pixel fill value for the area outside the transformed image. If given a number, the value is used for all bands.
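
To illustrate the parameters above, here is a minimal usage sketch; the random uint8 tensor is only a placeholder for a real image, and the non-default interpolation value is shown purely for illustration.

    import torch
    from torchvision.transforms import AutoAugmentPolicy, InterpolationMode
    from torchvision.transforms import v2

    # Placeholder input: a uint8 image tensor of shape [3, H, W]; additional
    # leading (batch or video) dimensions are also accepted, as noted above.
    img = torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8)

    augment = v2.AutoAugment(
        policy=AutoAugmentPolicy.IMAGENET,         # the default policy
        interpolation=InterpolationMode.BILINEAR,  # NEAREST is the default
    )

    out = augment(img)
    print(out.shape)  # same shape as the input: torch.Size([3, 224, 224])

In a training pipeline the transform is typically combined with other v2 transforms via torchvision.transforms.v2.Compose.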

Examples using AutoAugment:

Illustration of transforms

forward(*inputs: Any) → Any[source]

Do not override this! Use transform() instead.

static get_params(transform_num: int) → Tuple[int, Tensor, Tensor][source]

Get parameters for the autoaugment transformation.

Returns:

The parameters required by the autoaugment transformation.
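
Based only on the signature documented above, a call could look like the following sketch; the interpretation of the returned values (a randomly chosen sub-policy index plus random probability and sign tensors consumed by that sub-policy) is an assumption drawn from the return type, not stated on this page.

    from torchvision.transforms import v2

    # Sketch based on the documented static signature:
    #   get_params(transform_num: int) -> Tuple[int, Tensor, Tensor]
    # The roles of the two tensors below are assumed, not documented here.
    policy_index, probs, signs = v2.AutoAugment.get_params(transform_num=25)

    print(policy_index)  # index of the randomly selected sub-policy (assumed)
    print(probs)         # random probabilities used by the sub-policy (assumed)
    print(signs)         # random signs for signed magnitudes (assumed)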
