from __future__ import annotations

from enum import Enum
from typing import Any, Mapping, Optional, Sequence, Tuple, Union

import torch
from torch.utils._pytree import tree_flatten

from ._tv_tensor import TVTensor


class BoundingBoxFormat(Enum):
    """Coordinate format of a bounding box.

    Available formats are:

    * ``XYXY``
    * ``XYWH``
    * ``CXCYWH``
    * ``XYWHR``: rotated boxes represented via corner, width and height,
      x1, y1 being top left, w, h being width and height. r is rotation
      angle in degrees.
    * ``CXCYWHR``: rotated boxes represented via center, width and height,
      cx, cy being center of box, w, h being width and height. r is
      rotation angle in degrees.
    * ``XYXYXYXY``: rotated boxes represented via corners, x1, y1 being
      top left, x2, y2 being top right, x3, y3 being bottom right,
      x4, y4 being bottom left.
    """

    XYXY = "XYXY"
    XYWH = "XYWH"
    CXCYWH = "CXCYWH"
    XYWHR = "XYWHR"
    CXCYWHR = "CXCYWHR"
    XYXYXYXY = "XYXYXYXY"
class BoundingBoxes(TVTensor):
    """:class:`torch.Tensor` subclass for bounding boxes with shape ``[N, K]``,
    where ``N`` is the number of bounding boxes and ``K`` is 4 for unrotated
    boxes, and 5 or 8 for rotated boxes.

    .. note::
        There should be only one :class:`~torchvision.tv_tensors.BoundingBoxes`
        instance per sample, e.g. ``{"img": img, "bbox": BoundingBoxes(...)}``,
        although one :class:`~torchvision.tv_tensors.BoundingBoxes` object can
        contain multiple bounding boxes.

    Args:
        data: Any data that can be turned into a tensor with :func:`torch.as_tensor`.
        format (BoundingBoxFormat, str): Format of the bounding box.
        canvas_size (two-tuple of ints): Height and width of the corresponding image or video.
        dtype (torch.dtype, optional): Desired data type of the bounding box. If
            omitted, will be inferred from ``data``.
        device (torch.device, optional): Desired device of the bounding box. If
            omitted and ``data`` is a :class:`torch.Tensor`, the device is taken
            from it. Otherwise, the bounding box is constructed on the CPU.
        requires_grad (bool, optional): Whether autograd should record operations
            on the bounding box. If omitted and ``data`` is a
            :class:`torch.Tensor`, the value is taken from it. Otherwise,
            defaults to ``False``.
    """

    format: BoundingBoxFormat
    canvas_size: Tuple[int, int]

    @classmethod
    def _wrap(
        cls,
        tensor: torch.Tensor,
        *,
        format: Union[BoundingBoxFormat, str],
        canvas_size: Tuple[int, int],
        check_dims: bool = True,
    ) -> BoundingBoxes:  # type: ignore[override]
        if check_dims:
            if tensor.ndim == 1:
                tensor = tensor.unsqueeze(0)
            elif tensor.ndim != 2:
                raise ValueError(f"Expected a 1D or 2D tensor, got {tensor.ndim}D")
        if isinstance(format, str):
            format = BoundingBoxFormat[format.upper()]
        bounding_boxes = tensor.as_subclass(cls)
        bounding_boxes.format = format
        bounding_boxes.canvas_size = canvas_size
        return bounding_boxes

    def __new__(
        cls,
        data: Any,
        *,
        format: Union[BoundingBoxFormat, str],
        canvas_size: Tuple[int, int],
        dtype: Optional[torch.dtype] = None,
        device: Optional[Union[torch.device, str, int]] = None,
        requires_grad: Optional[bool] = None,
    ) -> BoundingBoxes:
        tensor = cls._to_tensor(data, dtype=dtype, device=device, requires_grad=requires_grad)
        return cls._wrap(tensor, format=format, canvas_size=canvas_size)

    @classmethod
    def _wrap_output(
        cls,
        output: torch.Tensor,
        args: Sequence[Any] = (),
        kwargs: Optional[Mapping[str, Any]] = None,
    ) -> BoundingBoxes:
        # If there are BoundingBoxes instances in the output, their metadata got lost when we called
        # super().__torch_function__. We need to restore the metadata somehow, so we choose to take
        # the metadata from the first bbox in the parameters.
        # This should be what we want in most cases. When it's not, it's probably a mis-use anyway, e.g.
        # something like some_xyxy_bbox + some_xywh_bbox; we don't guard against those cases.
        flat_params, _ = tree_flatten(args + (tuple(kwargs.values()) if kwargs else ()))  # type: ignore[operator]
        first_bbox_from_args = next(x for x in flat_params if isinstance(x, BoundingBoxes))
        format, canvas_size = first_bbox_from_args.format, first_bbox_from_args.canvas_size

        if isinstance(output, torch.Tensor) and not isinstance(output, BoundingBoxes):
            output = BoundingBoxes._wrap(output, format=format, canvas_size=canvas_size, check_dims=False)
        elif isinstance(output, (tuple, list)):
            output = type(output)(
                BoundingBoxes._wrap(part, format=format, canvas_size=canvas_size, check_dims=False)
                for part in output
            )
        return output

    def __repr__(self, *, tensor_contents: Any = None) -> str:  # type: ignore[override]
        return self._make_repr(format=self.format, canvas_size=self.canvas_size)