RandomAffine
- class torchvision.transforms.v2.RandomAffine(degrees: Union[Number, Sequence], translate: Optional[Sequence[float]] = None, scale: Optional[Sequence[float]] = None, shear: Optional[Union[int, float, Sequence[float]]] = None, interpolation: Union[InterpolationMode, int] = InterpolationMode.NEAREST, fill: Union[int, float, Sequence[int], Sequence[float], None, Dict[Type, Optional[Union[int, float, Sequence[int], Sequence[float]]]]] = 0, center: Optional[List[float]] = None)[source]
[BETA] Random affine transformation of the input keeping center invariant.
Warning
The RandomAffine transform is in Beta stage, and while we do not expect major breaking changes, some APIs may still change according to user feedback. Please submit any feedback you may have in this issue: https://github.com/pytorch/vision/issues/6753, and you can also check out https://github.com/pytorch/vision/issues/7319 to learn more about the APIs that we suspect might involve future changes.
If the input is a torch.Tensor or a Datapoint (e.g. Image, Video, BoundingBox etc.), it can have an arbitrary number of leading batch dimensions. For example, the image can have [..., C, H, W] shape. A bounding box can have [..., 4] shape.
- Parameters:
degrees (sequence or number) – Range of degrees to select from. If degrees is a number instead of sequence like (min, max), the range of degrees will be (-degrees, +degrees). Set to 0 to deactivate rotations.
translate (tuple, optional) – tuple of maximum absolute fraction for horizontal and vertical translations. For example, with translate=(a, b), the horizontal shift is randomly sampled in the range -img_width * a < dx < img_width * a and the vertical shift is randomly sampled in the range -img_height * b < dy < img_height * b. Will not translate by default.
scale (tuple, optional) – scaling factor interval, e.g. (a, b); the scale is randomly sampled from the range a <= scale <= b. Will keep original scale by default.
shear (sequence or number, optional) – Range of degrees to select from. If shear is a number, a shear parallel to the x-axis in the range (-shear, +shear) will be applied. Else if shear is a sequence of 2 values a shear parallel to the x-axis in the range (shear[0], shear[1]) will be applied. Else if shear is a sequence of 4 values, an x-axis shear in (shear[0], shear[1]) and y-axis shear in (shear[2], shear[3]) will be applied. Will not apply shear by default.
interpolation (InterpolationMode, optional) – Desired interpolation enum defined by torchvision.transforms.InterpolationMode. Default is InterpolationMode.NEAREST. If input is Tensor, only InterpolationMode.NEAREST and InterpolationMode.BILINEAR are supported. The corresponding Pillow integer constants, e.g. PIL.Image.BILINEAR, are accepted as well.
fill (number or tuple or dict, optional) – Pixel fill value used when the padding_mode is constant. Default is 0. If a tuple of length 3, it is used to fill R, G, B channels respectively. Fill value can also be a dictionary mapping data type to the fill value, e.g. fill={datapoints.Image: 127, datapoints.Mask: 0}, where Image will be filled with 127 and Mask will be filled with 0.
center (sequence, optional) – Optional center of rotation, (x, y). Origin is the upper left corner. Default is the center of the image.
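A minimal usage sketch follows. The parameter values, the 3x224x224 input, and the variable names are illustrative only, not prescribed by the API:

```python
import torch
from torchvision.transforms import v2, InterpolationMode

# Illustrative ranges: rotate up to +/-15 degrees, translate up to 10% of
# width/height, scale between 0.9 and 1.1, shear along x in [-5, 5] degrees.
transform = v2.RandomAffine(
    degrees=15,
    translate=(0.1, 0.1),
    scale=(0.9, 1.1),
    shear=(-5.0, 5.0),
    interpolation=InterpolationMode.BILINEAR,
    fill=0,  # area exposed by the transform is filled with 0
)

img = torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8)  # [C, H, W] tensor image
out = transform(img)  # randomly transformed image with the same shape
```

Because the fill value may also be a dictionary keyed by datapoint type (see the fill parameter above), images and masks passed through the same call can receive different fill values.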
- static get_params(degrees: List[float], translate: Optional[List[float]], scale_ranges: Optional[List[float]], shears: Optional[List[float]], img_size: List[int]) → Tuple[float, Tuple[int, int], float, Tuple[float, float]][source]
Get parameters for affine transformation
- Returns:
params to be passed to the affine transformation
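A sketch of sampling parameters once and reusing them, based on the get_params signature documented above; the ranges and the 224x224 image size are illustrative, and the sampled values are applied here with torchvision.transforms.v2.functional.affine:

```python
import torch
from torchvision.transforms import v2
from torchvision.transforms.v2 import functional as F

img = torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8)

# Sample one set of affine parameters for a 224x224 image. Reusing the same
# parameters lets you apply an identical transform to paired inputs
# (e.g. an image and its segmentation mask).
angle, (tx, ty), scale, (shear_x, shear_y) = v2.RandomAffine.get_params(
    degrees=[-15.0, 15.0],
    translate=[0.1, 0.1],
    scale_ranges=[0.9, 1.1],
    shears=[-5.0, 5.0],
    img_size=[224, 224],
)

out = F.affine(
    img, angle=angle, translate=[tx, ty], scale=scale, shear=[shear_x, shear_y]
)
```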