
vgg16

torchvision.models.vgg16(*, weights: Optional[torchvision.models.vgg.VGG16_Weights] = None, progress: bool = True, **kwargs: Any) → torchvision.models.vgg.VGG

VGG-16 from Very Deep Convolutional Networks for Large-Scale Image Recognition.

Parameters
  • weights (VGG16_Weights, optional) – The pretrained weights to use. See VGG16_Weights below for more details and possible values. By default, no pre-trained weights are used.

  • progress (bool, optional) – If True, displays a progress bar of the download to stderr. Default is True.

  • **kwargs – parameters passed to the torchvision.models.vgg.VGG base class. Please refer to the source code for more details about this class.
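As a sketch of how the extra keyword arguments behave (assuming the standard VGG constructor signature, where num_classes defaults to 1000):

>>> from torchvision.models import vgg16
>>> # Randomly initialized VGG-16 with a 10-class head. num_classes can only
>>> # be overridden when weights=None; pretrained checkpoints keep 1000 classes.
>>> model = vgg16(weights=None, num_classes=10)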

class torchvision.models.VGG16_Weights(value)

The model builder above accepts the following values as the weights parameter. VGG16_Weights.DEFAULT is equivalent to VGG16_Weights.IMAGENET1K_V1. You can also use strings, e.g. weights='DEFAULT' or weights='IMAGENET1K_V1'.
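For example, these calls are equivalent ways of requesting the same checkpoint:

>>> from torchvision.models import vgg16, VGG16_Weights
>>> model = vgg16(weights=VGG16_Weights.IMAGENET1K_V1)
>>> model = vgg16(weights=VGG16_Weights.DEFAULT)   # same as IMAGENET1K_V1
>>> model = vgg16(weights="DEFAULT")               # string form also accepted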

VGG16_Weights.IMAGENET1K_V1:

These weights were trained from scratch using a simplified training recipe. Also available as VGG16_Weights.DEFAULT.

  • acc@1 (on ImageNet-1K): 71.592
  • acc@5 (on ImageNet-1K): 90.382
  • min_size: height=32, width=32
  • categories: tench, goldfish, great white shark, … (997 omitted)
  • recipe: link
  • num_params: 138357544

The inference transforms are available at VGG16_Weights.IMAGENET1K_V1.transforms and perform the following preprocessing operations: they accept PIL.Image, batched (B, C, H, W), and single (C, H, W) image torch.Tensor objects. The images are resized to resize_size=[256] using interpolation=InterpolationMode.BILINEAR, followed by a central crop of crop_size=[224]. Finally, the values are rescaled to [0.0, 1.0] and then normalized using mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225].
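Putting this together, a minimal classification sketch (img below is an input you supply, e.g. a PIL.Image):

>>> from torchvision.models import vgg16, VGG16_Weights
>>> weights = VGG16_Weights.IMAGENET1K_V1
>>> model = vgg16(weights=weights)
>>> model.eval()
>>> preprocess = weights.transforms()
>>> batch = preprocess(img).unsqueeze(0)           # img: PIL.Image or (C, H, W) tensor
>>> prediction = model(batch).squeeze(0).softmax(0)
>>> class_id = int(prediction.argmax())
>>> print(weights.meta["categories"][class_id])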

VGG16_Weights.IMAGENET1K_FEATURES:

These weights can’t be used for classification because they are missing values in the classifier module. Only the features module has valid values and can be used for feature extraction. The weights were trained using the original input standardization method as described in the paper.

  • acc@1 (on ImageNet-1K): nan
  • acc@5 (on ImageNet-1K): nan
  • min_size: height=32, width=32
  • categories: None
  • recipe: link
  • num_params: 138357544

The inference transforms are available at VGG16_Weights.IMAGENET1K_FEATURES.transforms and perform the following preprocessing operations: they accept PIL.Image, batched (B, C, H, W), and single (C, H, W) image torch.Tensor objects. The images are resized to resize_size=[256] using interpolation=InterpolationMode.BILINEAR, followed by a central crop of crop_size=[224]. Finally, the values are rescaled to [0.0, 1.0] and then normalized using mean=[0.48235, 0.45882, 0.40784] and std=[0.00392156862745098, 0.00392156862745098, 0.00392156862745098]. Note that this std is exactly 1/255, so the normalization amounts to subtracting the per-channel mean in the original [0, 255] pixel range, i.e. the paper's input standardization.
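Accordingly, a minimal feature-extraction sketch that touches only the features (and avgpool) modules, since the classifier weights are invalid (img again being an input you supply):

>>> import torch
>>> from torchvision.models import vgg16, VGG16_Weights
>>> weights = VGG16_Weights.IMAGENET1K_FEATURES
>>> model = vgg16(weights=weights)
>>> model.eval()
>>> batch = weights.transforms()(img).unsqueeze(0)
>>> with torch.no_grad():
...     feats = model.features(batch)              # (1, 512, 7, 7) for 224x224 inputs
...     pooled = model.avgpool(feats).flatten(1)   # (1, 25088) feature vector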
