- torchvision.models.detection.ssdlite320_mobilenet_v3_large(*, weights: Optional[SSDLite320_MobileNet_V3_Large_Weights] = None, progress: bool = True, num_classes: Optional[int] = None, weights_backbone: Optional[MobileNet_V3_Large_Weights] = MobileNet_V3_Large_Weights.IMAGENET1K_V1, trainable_backbone_layers: Optional[int] = None, norm_layer: Optional[Callable[[...], Module]] = None, **kwargs: Any) → SSD
SSDlite model architecture with input size 320x320 and a MobileNetV3 Large backbone, as described at Searching for MobileNetV3 and MobileNetV2: Inverted Residuals and Linear Bottlenecks.
The detection module is in Beta stage, and backward compatibility is not guaranteed.
See ssd300_vgg16() for more details.
>>> import torch
>>> import torchvision
>>> from torchvision.models.detection import SSDLite320_MobileNet_V3_Large_Weights
>>> model = torchvision.models.detection.ssdlite320_mobilenet_v3_large(weights=SSDLite320_MobileNet_V3_Large_Weights.DEFAULT)
>>> model.eval()
>>> x = [torch.rand(3, 320, 320), torch.rand(3, 500, 400)]
>>> predictions = model(x)
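In eval mode the detection models return one Dict[str, Tensor] per input image, keyed by boxes, labels and scores. A minimal sketch of filtering the first image's detections (the 0.5 threshold is an arbitrary cutoff chosen for illustration):

>>> # Each prediction dict holds "boxes" of shape (N, 4) in (x1, y1, x2, y2) format,
>>> # integer class "labels" and per-detection confidence "scores".
>>> scores = predictions[0]["scores"]
>>> keep = scores > 0.5  # hypothetical confidence cutoff
>>> confident_boxes = predictions[0]["boxes"][keep]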
weights (SSDLite320_MobileNet_V3_Large_Weights, optional) – The pretrained weights to use. See SSDLite320_MobileNet_V3_Large_Weights below for more details and possible values. By default, no pre-trained weights are used.
progress (bool, optional) – If True, displays a progress bar of the download to stderr. Default is True.
num_classes (int, optional) – number of output classes of the model (including the background).
weights_backbone (MobileNet_V3_Large_Weights, optional) – The pretrained weights for the backbone.
trainable_backbone_layers (int, optional) – number of trainable (not frozen) layers starting from the final block. Valid values are between 0 and 6, with 6 meaning all backbone layers are trainable. If None is passed (the default), this value is set to 6.
norm_layer (callable, optional) – Module specifying the normalization layer to use.
**kwargs – parameters passed to the torchvision.models.detection.ssd.SSD base class. Please refer to the source code for more details about this class.
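As a sketch of how these arguments combine when fine-tuning (the class and layer counts below are arbitrary; the partial(nn.BatchNorm2d, ...) norm layer mirrors the builder's default, and any callable returning a Module would do):

>>> from functools import partial
>>> import torch.nn as nn
>>> from torchvision.models import MobileNet_V3_Large_Weights
>>> from torchvision.models.detection import ssdlite320_mobilenet_v3_large
>>> # Pretrained backbone, fresh detection head for 5 classes (incl. background),
>>> # only the last 3 backbone blocks left trainable.
>>> model = ssdlite320_mobilenet_v3_large(
...     weights=None,
...     weights_backbone=MobileNet_V3_Large_Weights.IMAGENET1K_V1,
...     num_classes=5,
...     trainable_backbone_layers=3,
...     norm_layer=partial(nn.BatchNorm2d, eps=0.001, momentum=0.03),
... )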
- class torchvision.models.detection.SSDLite320_MobileNet_V3_Large_Weights(value)
The model builder above accepts the following values as the weights parameter. SSDLite320_MobileNet_V3_Large_Weights.DEFAULT is equivalent to SSDLite320_MobileNet_V3_Large_Weights.COCO_V1. You can also use strings, e.g. weights='DEFAULT' or weights='COCO_V1'.
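For instance, both of the following select the same COCO checkpoint (a minimal sketch):

>>> from torchvision.models.detection import ssdlite320_mobilenet_v3_large
>>> m1 = ssdlite320_mobilenet_v3_large(weights="DEFAULT")
>>> m2 = ssdlite320_mobilenet_v3_large(weights="COCO_V1")  # same weights as above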
These weights were produced by following a similar training recipe as in the paper. Also available as SSDLite320_MobileNet_V3_Large_Weights.DEFAULT.
box_map (on COCO-val2017)
categories: __background__, person, bicycle, … (88 omitted)
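The same metadata can also be read programmatically from the enum's meta dictionary, e.g.:

>>> w = SSDLite320_MobileNet_V3_Large_Weights.COCO_V1
>>> w.meta["categories"][:3]  # full list has 91 entries incl. __background__
['__background__', 'person', 'bicycle']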
The inference transforms are available at SSDLite320_MobileNet_V3_Large_Weights.COCO_V1.transforms and perform the following preprocessing operations: Accepts batched (B, C, H, W) and single (C, H, W) image torch.Tensor objects. The images are rescaled to [0.0, 1.0].
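A minimal end-to-end sketch using the bundled transforms (the random tensor stands in for a real image):

>>> import torch, torchvision
>>> from torchvision.models.detection import SSDLite320_MobileNet_V3_Large_Weights
>>> weights = SSDLite320_MobileNet_V3_Large_Weights.DEFAULT
>>> preprocess = weights.transforms()  # the ObjectDetection inference preset
>>> model = torchvision.models.detection.ssdlite320_mobilenet_v3_large(weights=weights).eval()
>>> img = torch.rand(3, 480, 640)  # stand-in for a real image tensor
>>> batch = [preprocess(img)]      # converts to float; uint8 inputs are rescaled to [0.0, 1.0]
>>> prediction = model(batch)[0]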