TrivialAugmentWide¶
- class torchvision.transforms.v2.TrivialAugmentWide(num_magnitude_bins: int = 31, interpolation: Union[InterpolationMode, int] = InterpolationMode.NEAREST, fill: Union[int, float, Sequence[int], Sequence[float], None, Dict[Union[Type, str], Optional[Union[int, float, Sequence[int], Sequence[float]]]]] = None)[source]¶
[BETA] Dataset-independent data-augmentation with TrivialAugment Wide, as described in “TrivialAugment: Tuning-free Yet State-of-the-Art Data Augmentation”.
Note
The TrivialAugmentWide transform is in Beta stage, and while we do not expect disruptive breaking changes, some APIs may slightly change according to user feedback. Please submit any feedback you may have in this issue: https://github.com/pytorch/vision/issues/6753.
This transformation works on images and videos only.
If the input is torch.Tensor, it should be of type torch.uint8, and it is expected to have […, 1 or 3, H, W] shape, where … means an arbitrary number of leading dimensions. If img is PIL Image, it is expected to be in mode “L” or “RGB”.
- Parameters:
num_magnitude_bins (int, optional) – The number of different magnitude values.
interpolation (InterpolationMode, optional) – Desired interpolation enum defined by torchvision.transforms.InterpolationMode. Default is InterpolationMode.NEAREST. If input is Tensor, only InterpolationMode.NEAREST and InterpolationMode.BILINEAR are supported.
fill (sequence or number, optional) – Pixel fill value for the area outside the transformed image. If given a number, the value is used for all bands respectively.
Examples using TrivialAugmentWide: Illustration of transforms
- forward(*inputs: Any) → Any [source]¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
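For instance (an illustrative sketch, not part of the reference): call the transform instance itself rather than forward() directly.

```python
import torch
from torchvision.transforms import v2

transform = v2.TrivialAugmentWide()
img = torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8)

out = transform(img)            # preferred: runs any registered hooks
# out = transform.forward(img)  # works, but silently skips hooks
```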