AutoAugment
- class torchvision.transforms.v2.AutoAugment(policy: AutoAugmentPolicy = AutoAugmentPolicy.IMAGENET, interpolation: Union[InterpolationMode, int] = InterpolationMode.NEAREST, fill: Union[int, float, Sequence[int], Sequence[float], None, Dict[Union[Type, str], Optional[Union[int, float, Sequence[int], Sequence[float]]]]] = None)[source]
AutoAugment data augmentation method based on “AutoAugment: Learning Augmentation Strategies from Data”.
This transformation works on images and videos only.
If the input is torch.Tensor, it should be of type torch.uint8, and it is expected to have […, 1 or 3, H, W] shape, where … means an arbitrary number of leading dimensions. If img is a PIL Image, it is expected to be in mode "L" or "RGB".

- Parameters:
  - policy (AutoAugmentPolicy, optional) – Desired policy enum defined by torchvision.transforms.autoaugment.AutoAugmentPolicy. Default is AutoAugmentPolicy.IMAGENET.
  - interpolation (InterpolationMode, optional) – Desired interpolation enum defined by torchvision.transforms.InterpolationMode. Default is InterpolationMode.NEAREST. If the input is a Tensor, only InterpolationMode.NEAREST and InterpolationMode.BILINEAR are supported.
  - fill (sequence or number, optional) – Pixel fill value for the area outside the transformed image. If given a number, the value is used for all bands.
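A minimal usage sketch based on the signature above; the random uint8 tensor is only a stand-in for a real image:

```python
import torch
from torchvision.transforms import AutoAugmentPolicy, InterpolationMode
from torchvision.transforms import v2

# Placeholder uint8 image in [..., 3, H, W] layout; a real image tensor works the same way.
img = torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8)

# The arguments below are the documented defaults, spelled out for clarity.
transform = v2.AutoAugment(
    policy=AutoAugmentPolicy.IMAGENET,
    interpolation=InterpolationMode.NEAREST,
    fill=None,
)

augmented = transform(img)  # same shape and dtype as the input
```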
Examples using AutoAugment: Illustration of transforms

- forward(*inputs: Any) → Any [source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
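For instance, prefer invoking the transform instance directly rather than its forward method (a small sketch; the tensor is only a placeholder):

```python
import torch
from torchvision.transforms import v2

transform = v2.AutoAugment()
img = torch.randint(0, 256, (3, 32, 32), dtype=torch.uint8)

# Preferred: calling the instance goes through __call__, which runs any
# registered hooks around forward().
out = transform(img)

# Calling forward() directly also works here, but it bypasses registered hooks.
out = transform.forward(img)
```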