mobilenet_v3_large

torchvision.models.quantization.mobilenet_v3_large(*, weights: Optional[Union[torchvision.models.quantization.mobilenetv3.MobileNet_V3_Large_QuantizedWeights, torchvision.models.mobilenetv3.MobileNet_V3_Large_Weights]] = None, progress: bool = True, quantize: bool = False, **kwargs: Any) → torchvision.models.quantization.mobilenetv3.QuantizableMobileNetV3 [source]

MobileNetV3 (Large) model from the Searching for MobileNetV3 paper.

Note

quantize = True returns a quantized model with 8-bit weights. Quantized models support only inference and run on CPUs; GPU inference is not yet supported.

Parameters
  • weights (MobileNet_V3_Large_QuantizedWeights or MobileNet_V3_Large_Weights, optional) – The pretrained weights for the model. See MobileNet_V3_Large_QuantizedWeights below for more details and possible values. By default, no pre-trained weights are used.

  • progress (bool) – If True, displays a progress bar of the download to stderr. Default is True.

  • quantize (bool) – If True, return a quantized version of the model. Default is False.

  • **kwargs – parameters passed to the torchvision.models.quantization.QuantizableMobileNetV3 base class. Please refer to the source code for more details about this class.
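
A minimal usage sketch, assuming a recent torchvision with the multi-weight API; setting the qnnpack quantized engine is an assumption that may be needed on machines where it is not already the default:

    import torch
    from torchvision.models.quantization import (
        MobileNet_V3_Large_QuantizedWeights,
        mobilenet_v3_large,
    )

    # The published weights use the qnnpack backend; select the matching engine
    # (assumed necessary on platforms where fbgemm is the default).
    torch.backends.quantized.engine = "qnnpack"

    weights = MobileNet_V3_Large_QuantizedWeights.DEFAULT
    model = mobilenet_v3_large(weights=weights, quantize=True)
    model.eval()  # quantized models support inference only, on CPU

    x = torch.rand(1, 3, 224, 224)
    with torch.inference_mode():
        logits = model(x)  # shape: (1, 1000)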

class torchvision.models.quantization.MobileNet_V3_Large_QuantizedWeights(value)[source]

The model builder above accepts the following values as the weights parameter. MobileNet_V3_Large_QuantizedWeights.DEFAULT is equivalent to MobileNet_V3_Large_QuantizedWeights.IMAGENET1K_QNNPACK_V1. You can also use strings, e.g. weights='DEFAULT' or weights='IMAGENET1K_QNNPACK_V1'.
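
For example, passing the weights by name (a minimal sketch; the string resolves to the enum member above):

    from torchvision.models.quantization import mobilenet_v3_large

    # "IMAGENET1K_QNNPACK_V1" resolves to
    # MobileNet_V3_Large_QuantizedWeights.IMAGENET1K_QNNPACK_V1
    model = mobilenet_v3_large(weights="IMAGENET1K_QNNPACK_V1", quantize=True)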

MobileNet_V3_Large_QuantizedWeights.IMAGENET1K_QNNPACK_V1:

These weights were produced by doing Quantization Aware Training (eager mode) on top of the unquantized weights listed below. Also available as MobileNet_V3_Large_QuantizedWeights.DEFAULT.

acc@1 (on ImageNet-1K): 73.004
acc@5 (on ImageNet-1K): 90.858
num_params: 5483032
min_size: height=1, width=1
categories: tench, goldfish, great white shark, … (997 omitted)
backend: qnnpack
recipe: link
unquantized: MobileNet_V3_Large_Weights.IMAGENET1K_V1

The inference transforms are available at MobileNet_V3_Large_QuantizedWeights.IMAGENET1K_QNNPACK_V1.transforms and perform the following preprocessing operations: Accepts PIL.Image, batched (B, C, H, W) and single (C, H, W) image torch.Tensor objects. The images are resized to resize_size=[256] using interpolation=InterpolationMode.BILINEAR, followed by a central crop of crop_size=[224]. Finally the values are first rescaled to [0.0, 1.0] and then normalized using mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225].
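
An end-to-end sketch that pairs these preset transforms with the quantized model (the image path is a placeholder):

    import torch
    from PIL import Image
    from torchvision.models.quantization import (
        MobileNet_V3_Large_QuantizedWeights,
        mobilenet_v3_large,
    )

    weights = MobileNet_V3_Large_QuantizedWeights.IMAGENET1K_QNNPACK_V1
    model = mobilenet_v3_large(weights=weights, quantize=True)
    model.eval()

    preprocess = weights.transforms()  # resize(256) -> center crop(224) -> rescale + normalize
    img = Image.open("example.jpg")    # placeholder path
    batch = preprocess(img).unsqueeze(0)

    with torch.inference_mode():
        probs = model(batch).squeeze(0).softmax(0)

    class_id = int(probs.argmax())
    print(weights.meta["categories"][class_id], float(probs[class_id]))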

class torchvision.models.MobileNet_V3_Large_Weights(value)[source]

The model builder above accepts the following values as the weights parameter. MobileNet_V3_Large_Weights.DEFAULT is equivalent to MobileNet_V3_Large_Weights.IMAGENET1K_V2. You can also use strings, e.g. weights='DEFAULT' or weights='IMAGENET1K_V1'.

MobileNet_V3_Large_Weights.IMAGENET1K_V1:

These weights were trained from scratch by using a simple training recipe.

acc@1 (on ImageNet-1K): 74.042
acc@5 (on ImageNet-1K): 91.34
min_size: height=1, width=1
categories: tench, goldfish, great white shark, … (997 omitted)
num_params: 5483032
recipe: link

The inference transforms are available at MobileNet_V3_Large_Weights.IMAGENET1K_V1.transforms and perform the following preprocessing operations: Accepts PIL.Image, batched (B, C, H, W) and single (C, H, W) image torch.Tensor objects. The images are resized to resize_size=[256] using interpolation=InterpolationMode.BILINEAR, followed by a central crop of crop_size=[224]. Finally the values are first rescaled to [0.0, 1.0] and then normalized using mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225].
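
For reference, the preset above corresponds roughly to the following hand-written pipeline (a sketch using torchvision.transforms; prefer the preset in practice):

    from torchvision import transforms

    preprocess = transforms.Compose([
        transforms.Resize(256, interpolation=transforms.InterpolationMode.BILINEAR),
        transforms.CenterCrop(224),
        transforms.ToTensor(),  # PIL image -> float tensor in [0.0, 1.0]
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])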

MobileNet_V3_Large_Weights.IMAGENET1K_V2:

These weights improve marginally upon the results of the original paper by using a modified version of TorchVision’s new training recipe. Also available as MobileNet_V3_Large_Weights.DEFAULT.

acc@1 (on ImageNet-1K): 75.274
acc@5 (on ImageNet-1K): 92.566
min_size: height=1, width=1
categories: tench, goldfish, great white shark, … (997 omitted)
num_params: 5483032
recipe: link

The inference transforms are available at MobileNet_V3_Large_Weights.IMAGENET1K_V2.transforms and perform the following preprocessing operations: Accepts PIL.Image, batched (B, C, H, W) and single (C, H, W) image torch.Tensor objects. The images are resized to resize_size=[232] using interpolation=InterpolationMode.BILINEAR, followed by a central crop of crop_size=[224]. Finally the values are first rescaled to [0.0, 1.0] and then normalized using mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225].
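
Since the builder's weights parameter also accepts the float MobileNet_V3_Large_Weights, the V2 weights can be loaded without quantization (a sketch; note that V2 resizes to 232 rather than 256):

    from torchvision.models import MobileNet_V3_Large_Weights
    from torchvision.models.quantization import mobilenet_v3_large

    weights = MobileNet_V3_Large_Weights.IMAGENET1K_V2
    model = mobilenet_v3_large(weights=weights, quantize=False)  # float model
    model.eval()
    preprocess = weights.transforms()  # resize(232) -> center crop(224)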
