BoundingBox
class torchvision.datapoints.BoundingBox(data: Any, *, format: Union[BoundingBoxFormat, str], spatial_size: Tuple[int, int], dtype: Optional[dtype] = None, device: Optional[Union[device, str, int]] = None, requires_grad: Optional[bool] = None)
[BETA] torch.Tensor subclass for bounding boxes.

Parameters:
- data – Any data that can be turned into a tensor with torch.as_tensor().
- format (BoundingBoxFormat, str) – Format of the bounding box.
- spatial_size (two-tuple of ints) – Height and width of the corresponding image or video.
- dtype (torch.dtype, optional) – Desired data type of the bounding box. If omitted, it will be inferred from data.
- device (torch.device, optional) – Desired device of the bounding box. If omitted and data is a torch.Tensor, the device is taken from it. Otherwise, the bounding box is constructed on the CPU.
- requires_grad (bool, optional) – Whether autograd should record operations on the bounding box. If omitted and data is a torch.Tensor, the value is taken from it. Otherwise, defaults to False.
Examples using BoundingBox:

- Datapoints FAQ
- Getting started with transforms v2
classmethod wrap_like(other: BoundingBox, tensor: Tensor, *, format: Optional[BoundingBoxFormat] = None, spatial_size: Optional[Tuple[int, int]] = None) → BoundingBox
Wrap a torch.Tensor as BoundingBox from a reference.

Parameters:
- other (BoundingBox) – Reference bounding box.
- tensor (Tensor) – Tensor to be wrapped as BoundingBox.
- format (BoundingBoxFormat, str, optional) – Format of the bounding box. If omitted, it is taken from the reference.
- spatial_size (two-tuple of ints, optional) – Height and width of the corresponding image or video. If omitted, it is taken from the reference.
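A minimal sketch of wrap_like, assuming torchvision ≥ 0.15 with the beta datapoints API. In the beta, most tensor operations on a datapoint return a plain torch.Tensor, so wrap_like is the typical way to restore the bounding-box metadata from a reference:

```python
from torchvision import datapoints

ref = datapoints.BoundingBox(
    [[0, 0, 50, 50]],
    format=datapoints.BoundingBoxFormat.XYXY,
    spatial_size=(100, 100),
)

# Shifting the box yields a plain tensor; re-wrap it so that
# format and spatial_size are inherited from the reference.
raw = ref + 5
box = datapoints.BoundingBox.wrap_like(ref, raw)

assert isinstance(box, datapoints.BoundingBox)
assert box.format == ref.format and box.spatial_size == ref.spatial_size
```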