- torchvision.models.vit_l_32(*, weights: Optional[ViT_L_32_Weights] = None, progress: bool = True, **kwargs: Any) → VisionTransformer
Constructs a vit_l_32 architecture from An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale.
Parameters:
- weights (ViT_L_32_Weights, optional) – The pretrained weights to use. See ViT_L_32_Weights below for more details and possible values. By default, no pre-trained weights are used.
- progress (bool, optional) – If True, displays a progress bar of the download to stderr. Default is True.
- **kwargs – parameters passed to the torchvision.models.vision_transformer.VisionTransformer base class. Please refer to the source code for more details about this class.
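A minimal usage sketch of the builder (the 1000-class output and 224×224 input size follow from the ImageNet-1K pretrained weights; the random input is a stand-in for real data):

```python
import torch
from torchvision.models import vit_l_32, ViT_L_32_Weights

# Build vit_l_32 with the default pretrained weights; omitting `weights`
# (or passing None) yields a randomly initialized model instead.
model = vit_l_32(weights=ViT_L_32_Weights.DEFAULT)
model.eval()

# Dummy batch: the pretrained ViT-L/32 expects 224x224 RGB input.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # torch.Size([1, 1000]), one score per ImageNet-1K class
```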
- class torchvision.models.ViT_L_32_Weights(value)
The model builder above accepts the following values as the weights parameter. ViT_L_32_Weights.DEFAULT is equivalent to ViT_L_32_Weights.IMAGENET1K_V1. You can also use strings, e.g. weights='DEFAULT' or weights='IMAGENET1K_V1'.

ViT_L_32_Weights.IMAGENET1K_V1:

These weights were trained from scratch by using a modified version of DeiT’s training recipe. Also available as ViT_L_32_Weights.DEFAULT.
- acc@1 (on ImageNet-1K)
- acc@5 (on ImageNet-1K)
- categories: tench, goldfish, great white shark, … (997 omitted)
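As a short illustration of the enum values, each weight entry exposes its metadata (including the category names above) through its meta dictionary, and DEFAULT resolves to the same member as IMAGENET1K_V1:

```python
from torchvision.models import ViT_L_32_Weights

weights = ViT_L_32_Weights.IMAGENET1K_V1
assert weights is ViT_L_32_Weights.DEFAULT  # DEFAULT is an alias for IMAGENET1K_V1

# The `meta` dict carries the entry's metadata; "categories" lists the
# ImageNet-1K class names in the order of the model's output logits.
print(weights.meta["categories"][:3])  # ['tench', 'goldfish', 'great white shark']
print(len(weights.meta["categories"]))  # 1000
```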
The inference transforms are available at ViT_L_32_Weights.IMAGENET1K_V1.transforms and perform the following preprocessing operations: Accepts PIL.Image, batched (B, C, H, W) and single (C, H, W) image torch.Tensor objects. The images are resized to resize_size using interpolation=InterpolationMode.BILINEAR, followed by a central crop of crop_size. Finally the values are first rescaled to [0.0, 1.0] and then normalized using mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225].
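Putting the pieces together, a sketch of the inference pipeline using these transforms (the random uint8 tensor stands in for a real photo; weights.transforms() instantiates the preset described above):

```python
import torch
from torchvision.models import vit_l_32, ViT_L_32_Weights

weights = ViT_L_32_Weights.IMAGENET1K_V1
model = vit_l_32(weights=weights).eval()

# Instantiate the preset: resize, central crop, rescale to [0.0, 1.0],
# then normalize with the ImageNet mean/std listed above.
preprocess = weights.transforms()

img = torch.randint(0, 256, (3, 500, 400), dtype=torch.uint8)  # stand-in image
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=-1)
score, class_id = probs.max(dim=-1)
print(weights.meta["categories"][class_id.item()], float(score))
```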