- torchvision.models.video.mvit_v2_s(*, weights: Optional[MViT_V2_S_Weights] = None, progress: bool = True, **kwargs: Any) → MViT [source]¶
Constructs a small MViTV2 architecture from MViTv2: Improved Multiscale Vision Transformers for Classification and Detection.
The video module is in Beta stage, and backward compatibility is not guaranteed.
Parameters:
weights (MViT_V2_S_Weights, optional) – The pretrained weights to use. See MViT_V2_S_Weights below for more details and possible values. By default, no pre-trained weights are used.
progress (bool, optional) – If True, displays a progress bar of the download to stderr. Default is True.
**kwargs – parameters passed to the torchvision.models.video.MViT base class. Please refer to the source code for more details about this class.
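For illustration, a minimal sketch of the builder in use; the checkpoint name matches the weight enum documented below:

    import torch
    from torchvision.models.video import mvit_v2_s, MViT_V2_S_Weights

    # Build the model with pretrained Kinetics-400 weights; the checkpoint is
    # downloaded on first use, and progress=True prints a bar to stderr.
    model = mvit_v2_s(weights=MViT_V2_S_Weights.KINETICS400_V1, progress=True)
    model.eval()

    # With weights=None (the default) the model is randomly initialized.
    scratch = mvit_v2_s(weights=None)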
- class torchvision.models.video.MViT_V2_S_Weights(value)[source]¶
The model builder above accepts the following values as the weights parameter. MViT_V2_S_Weights.DEFAULT is equivalent to MViT_V2_S_Weights.KINETICS400_V1. You can also use strings, e.g. weights='DEFAULT' or weights='KINETICS400_V1'.
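For instance, the following calls select the same checkpoint:

    from torchvision.models.video import mvit_v2_s

    # String aliases resolve to the corresponding enum values.
    m1 = mvit_v2_s(weights="DEFAULT")
    m2 = mvit_v2_s(weights="KINETICS400_V1")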
MViT_V2_S_Weights.KINETICS400_V1:
The weights were ported from the paper. The accuracies are estimated on video level with parameters frame_rate=7.5, clips_per_video=5, and clip_len=16. Also available as MViT_V2_S_Weights.DEFAULT.
acc@1 (on Kinetics-400)
acc@5 (on Kinetics-400)
categories: abseiling, air drumming, answering questions, … (397 omitted)
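The metadata listed above can also be read programmatically from the weight entry's meta dictionary; a small sketch (the "_metrics" key follows current torchvision conventions and is an assumption here):

    from torchvision.models.video import MViT_V2_S_Weights

    weights = MViT_V2_S_Weights.KINETICS400_V1
    # The 400 Kinetics class names back the "categories" row above.
    print(len(weights.meta["categories"]))  # 400
    print(weights.meta["categories"][:3])   # ['abseiling', 'air drumming', ...]
    # Assumption: accuracy metrics live under "_metrics", keyed by dataset.
    print(weights.meta.get("_metrics"))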
The inference transforms are available at MViT_V2_S_Weights.KINETICS400_V1.transforms and perform the following preprocessing operations: Accepts batched (B, T, C, H, W) and single (T, C, H, W) video frame torch.Tensor objects. The frames are resized to resize_size=[256] using interpolation=InterpolationMode.BILINEAR, followed by a central crop of crop_size=[224, 224]. The values are then rescaled to [0.0, 1.0] and normalized using mean=[0.45, 0.45, 0.45] and std=[0.225, 0.225, 0.225]. Finally, the output dimensions are permuted to (..., C, T, H, W) tensors.
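Putting the pieces together, a minimal end-to-end sketch; the random 16-frame clip stands in for real decoded video, so the predicted label is meaningless:

    import torch
    from torchvision.models.video import mvit_v2_s, MViT_V2_S_Weights

    weights = MViT_V2_S_Weights.KINETICS400_V1
    model = mvit_v2_s(weights=weights).eval()

    # A fake (T, C, H, W) uint8 clip of 16 frames; real code would decode video here.
    video = torch.randint(0, 256, (16, 3, 256, 340), dtype=torch.uint8)

    preprocess = weights.transforms()
    batch = preprocess(video).unsqueeze(0)  # (T, C, H, W) -> (1, C, T, H, W)

    with torch.no_grad():
        scores = model(batch).softmax(dim=1)
    label = scores.argmax(dim=1).item()
    print(weights.meta["categories"][label])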