ssd300_vgg16¶
torchvision.models.detection.ssd300_vgg16(*, weights: Optional[torchvision.models.detection.ssd.SSD300_VGG16_Weights] = None, progress: bool = True, num_classes: Optional[int] = None, weights_backbone: Optional[torchvision.models.vgg.VGG16_Weights] = VGG16_Weights.IMAGENET1K_FEATURES, trainable_backbone_layers: Optional[int] = None, **kwargs: Any) → torchvision.models.detection.ssd.SSD[source]¶

The SSD300 model is based on the SSD: Single Shot MultiBox Detector paper.
Warning
The detection module is in Beta stage, and backward compatibility is not guaranteed.
The input to the model is expected to be a list of tensors, each of shape [C, H, W], one for each image, and should be in the 0-1 range. Different images can have different sizes, but they will be resized to a fixed size before being passed to the backbone.
The behavior of the model changes depending on whether it is in training or evaluation mode.
During training, the model expects both the input tensors and a targets argument (a list of dictionaries), containing:

boxes (FloatTensor[N, 4]): the ground-truth boxes in [x1, y1, x2, y2] format, with 0 <= x1 < x2 <= W and 0 <= y1 < y2 <= H
labels (Int64Tensor[N]): the class label for each ground-truth box
The model returns a Dict[Tensor] during training, containing the classification and regression losses.
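A minimal training-mode sketch is shown below; the boxes, labels, and num_classes are arbitrary illustrative values, and weights=None / weights_backbone=None are used only to avoid any download:

>>> import torch
>>> import torchvision
>>> model = torchvision.models.detection.ssd300_vgg16(weights=None, weights_backbone=None, num_classes=3)
>>> model.train()
>>> images = [torch.rand(3, 300, 300), torch.rand(3, 400, 500)]
>>> targets = [
...     {"boxes": torch.tensor([[10.0, 20.0, 120.0, 140.0]]), "labels": torch.tensor([1])},
...     {"boxes": torch.tensor([[30.0, 30.0, 200.0, 250.0]]), "labels": torch.tensor([2])},
... ]
>>> losses = model(images, targets)  # dict of classification and regression losses
>>> sum(losses.values()).backward()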
During inference, the model requires only the input tensors, and returns the post-processed predictions as a List[Dict[Tensor]], one for each input image. The fields of the Dict are as follows, where N is the number of detections:

boxes (FloatTensor[N, 4]): the predicted boxes in [x1, y1, x2, y2] format, with 0 <= x1 < x2 <= W and 0 <= y1 < y2 <= H
labels (Int64Tensor[N]): the predicted labels for each detection
scores (Tensor[N]): the scores for each detection
Example
>>> model = torchvision.models.detection.ssd300_vgg16(weights=SSD300_VGG16_Weights.DEFAULT)
>>> model.eval()
>>> x = [torch.rand(3, 300, 300), torch.rand(3, 500, 400)]
>>> predictions = model(x)
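Continuing the example, one plausible way to consume the output is to filter by score; the 0.5 threshold below is an arbitrary choice, not part of the API:

>>> keep = predictions[0]["scores"] > 0.5    # arbitrary confidence threshold
>>> boxes = predictions[0]["boxes"][keep]    # FloatTensor[K, 4], [x1, y1, x2, y2]
>>> labels = predictions[0]["labels"][keep]  # Int64Tensor[K]
>>> scores = predictions[0]["scores"][keep]  # Tensor[K]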
Parameters

weights (SSD300_VGG16_Weights, optional) – The pretrained weights to use. See SSD300_VGG16_Weights below for more details and possible values. By default, no pre-trained weights are used.
progress (bool, optional) – If True, displays a progress bar of the download to stderr. Default is True.
num_classes (int, optional) – number of output classes of the model (including the background)
weights_backbone (VGG16_Weights, optional) – The pretrained weights for the backbone
trainable_backbone_layers (int, optional) – number of trainable (not frozen) layers starting from the final block. Valid values are between 0 and 5, with 5 meaning all backbone layers are trainable. If None is passed (the default) this value is set to 4.
**kwargs – parameters passed to the torchvision.models.detection.SSD base class. Please refer to the source code for more details about this class.
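As a sketch of how these parameters combine for fine-tuning (the class count of 11 here is a hypothetical dataset with 10 object categories plus background):

>>> from torchvision.models import VGG16_Weights
>>> # Hypothetical setup: ImageNet backbone, fresh detection head,
>>> # only the last 3 backbone blocks left trainable.
>>> model = torchvision.models.detection.ssd300_vgg16(
...     weights=None,
...     weights_backbone=VGG16_Weights.IMAGENET1K_FEATURES,
...     num_classes=11,  # 10 categories + __background__
...     trainable_backbone_layers=3,
... )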
class torchvision.models.detection.SSD300_VGG16_Weights(value)[source]¶

The model builder above accepts the following values as the weights parameter. SSD300_VGG16_Weights.DEFAULT is equivalent to SSD300_VGG16_Weights.COCO_V1. You can also use strings, e.g. weights='DEFAULT' or weights='COCO_V1'.

SSD300_VGG16_Weights.COCO_V1:
These weights were produced by following a training recipe similar to the paper's. Also available as SSD300_VGG16_Weights.DEFAULT.

box_map (on COCO-val2017)    25.1
num_params                   35641826
categories                   __background__, person, bicycle, … (88 omitted)
min_size                     height=1, width=1
recipe
The inference transforms are available at SSD300_VGG16_Weights.COCO_V1.transforms and perform the following preprocessing operations: Accepts PIL.Image, batched (B, C, H, W) and single (C, H, W) image torch.Tensor objects. The images are rescaled to [0.0, 1.0].
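A short end-to-end sketch using these transforms; "example.jpg" is a placeholder path, and the label lookup via weights.meta["categories"] follows the pattern used across torchvision weight enums:

>>> import torch
>>> from PIL import Image
>>> from torchvision.models.detection import ssd300_vgg16, SSD300_VGG16_Weights
>>> weights = SSD300_VGG16_Weights.COCO_V1
>>> preprocess = weights.transforms()  # rescales to [0.0, 1.0]
>>> model = ssd300_vgg16(weights=weights)
>>> model.eval()
>>> img = Image.open("example.jpg")  # placeholder image path
>>> with torch.no_grad():
...     prediction = model([preprocess(img)])[0]
>>> top_label = weights.meta["categories"][prediction["labels"][0]]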