RandomPerspective

class torchvision.transforms.v2.RandomPerspective(distortion_scale: float = 0.5, p: float = 0.5, interpolation: Union[InterpolationMode, int] = InterpolationMode.BILINEAR, fill: Union[int, float, Sequence[int], Sequence[float], None, Dict[Union[Type, str], Optional[Union[int, float, Sequence[int], Sequence[float]]]]] = 0)[source]

Perform a random perspective transformation of the input with a given probability.

If the input is a torch.Tensor or a TVTensor (e.g. Image, Video, BoundingBoxes, etc.) it can have an arbitrary number of leading batch dimensions. For example, the image can have [..., C, H, W] shape. A bounding box can have [..., 4] shape.

Parameters:
  • distortion_scale (float, optional) – controls the degree of distortion; ranges from 0 to 1. Default is 0.5.

  • p (float, optional) – probability of the input being transformed. Default is 0.5.

  • interpolation (InterpolationMode, optional) – Desired interpolation enum defined by torchvision.transforms.InterpolationMode. Default is InterpolationMode.BILINEAR. If the input is a Tensor, only InterpolationMode.NEAREST and InterpolationMode.BILINEAR are supported. The corresponding Pillow integer constants, e.g. PIL.Image.BILINEAR, are accepted as well.

  • fill (number or tuple or dict, optional) – Pixel fill value for the area outside the transformed image. Default is 0. If a tuple of length 3, it is used to fill the R, G, B channels respectively. The fill value can also be a dictionary mapping data type to fill value, e.g. fill={tv_tensors.Image: 127, tv_tensors.Mask: 0}, where Image will be filled with 127 and Mask with 0.
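A minimal usage sketch (the image shape and parameter values below are illustrative choices, not defaults):

import torch
from torchvision.transforms import v2

# Illustrative input: a random uint8 image tensor of shape [C, H, W].
img = torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8)

# p=1.0 forces the transform to always be applied.
transform = v2.RandomPerspective(distortion_scale=0.6, p=1.0, fill=0)
out = transform(img)  # same shape as the input, with the content warped

print(out.shape)  # torch.Size([3, 224, 224])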

Examples using RandomPerspective:

Illustration of transforms

static get_params(width: int, height: int, distortion_scale: float) → Tuple[List[List[int]], List[List[int]]][source]

Get the corner-point parameters for a random perspective transform.

Parameters:
  • width (int) – width of the image.

  • height (int) – height of the image.

  • distortion_scale (float) – controls the degree of distortion; ranges from 0 to 1.

Returns:

A list containing the [top-left, top-right, bottom-right, bottom-left] corner points of the original image, and a list containing the corresponding corner points of the transformed image.
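Because get_params is a static method, the sampled corner points can be drawn once and reused, for example to warp two inputs identically through the functional API. A minimal sketch, assuming torchvision.transforms.v2.functional.perspective and illustrative shapes:

import torch
from torchvision.transforms import v2, InterpolationMode
from torchvision.transforms.v2 import functional as F

# Illustrative inputs: an image and a segmentation mask of the same size.
img = torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8)
mask = torch.zeros(1, 224, 224, dtype=torch.uint8)

# Sample one set of corner points (original image, transformed image).
startpoints, endpoints = v2.RandomPerspective.get_params(
    width=224, height=224, distortion_scale=0.5
)

# Apply the same perspective warp to both inputs; nearest-neighbour
# interpolation keeps the mask's label values intact.
warped_img = F.perspective(img, startpoints, endpoints)
warped_mask = F.perspective(mask, startpoints, endpoints,
                            interpolation=InterpolationMode.NEAREST)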

make_params(flat_inputs: List[Any]) → Dict[str, Any][source]

Method to override for custom transforms.

See How to write your own v2 transforms

transform(inpt: Any, params: Dict[str, Any]) → Any[source]

Method to override for custom transforms.

See How to write your own v2 transforms
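As a rough sketch of this override pattern (the class name RandomGain and its behaviour are hypothetical, not part of torchvision): a custom v2 transform samples its parameters once per call in make_params and applies them to every input in transform.

from typing import Any, Dict, List

import torch
from torchvision.transforms import v2


class RandomGain(v2.Transform):
    """Hypothetical transform that scales tensor inputs by a random gain."""

    def __init__(self, max_gain: float = 2.0):
        super().__init__()
        self.max_gain = max_gain

    def make_params(self, flat_inputs: List[Any]) -> Dict[str, Any]:
        # Sampled once per call and shared by all inputs of that call.
        gain = 1.0 + (self.max_gain - 1.0) * torch.rand(()).item()
        return {"gain": gain}

    def transform(self, inpt: Any, params: Dict[str, Any]) -> Any:
        # For simplicity this sketch assumes tensor inputs.
        return inpt * params["gain"]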
