class torchvision.datasets.Kinetics(root: str, frames_per_clip: int, num_classes: str = '400', split: str = 'train', frame_rate: Optional[int] = None, step_between_clips: int = 1, transform: Optional[Callable] = None, extensions: Tuple[str, ...] = ('avi', 'mp4'), download: bool = False, num_download_workers: int = 1, num_workers: int = 1, _precomputed_metadata: Optional[Dict[str, Any]] = None, _video_width: int = 0, _video_height: int = 0, _video_min_dimension: int = 0, _audio_samples: int = 0, _audio_channels: int = 0, _legacy: bool = False, output_format: str = 'TCHW')[source]

Generic Kinetics dataset.

Kinetics-400/600/700 are action recognition video datasets. This dataset considers every video as a collection of fixed-size video clips, specified by frames_per_clip, where the step in frames between consecutive clips is given by step_between_clips.

To give an example, for 2 videos with 10 and 15 frames respectively, if frames_per_clip=5 and step_between_clips=5, the dataset size will be (2 + 3) = 5, where the first two elements come from video 1 and the next three from video 2. Note that clips which do not have exactly frames_per_clip frames are dropped, so not every frame of a video is guaranteed to appear in the dataset.
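The clip-count arithmetic above can be sketched as follows. num_clips is a hypothetical helper written for illustration, not part of the torchvision API:

```python
def num_clips(num_frames: int, frames_per_clip: int, step_between_clips: int) -> int:
    """Number of fixed-size clips extracted from one video, dropping any
    trailing clip that would have fewer than frames_per_clip frames."""
    if num_frames < frames_per_clip:
        return 0
    return (num_frames - frames_per_clip) // step_between_clips + 1

# The example from the text: videos of 10 and 15 frames,
# frames_per_clip=5, step_between_clips=5 -> 2 + 3 = 5 clips.
print(num_clips(10, 5, 5) + num_clips(15, 5, 5))  # 5
```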

  • root (string) –

    Root directory of the Kinetics Dataset. The directory should be structured as follows:

    ├── split
    │   ├── class1
    │   │   ├── clip1.mp4
    │   │   ├── clip2.mp4
    │   │   ├── clip3.mp4
    │   │   └── ...
    │   └── class2
    │       ├── clipx.mp4
    │       └── ...

    Note: split is appended automatically using the split argument.

  • frames_per_clip (int) – number of frames in a clip

  • num_classes (str) – select between Kinetics-400 ("400", default), Kinetics-600 ("600"), and Kinetics-700 ("700")

  • split (str) – split of the dataset to consider; supports "train" (default), "val", and "test"

  • frame_rate (int, optional) – resample the videos to this frame rate; if omitted, each clip keeps its native frame rate

  • step_between_clips (int) – number of frames between each clip

  • transform (callable, optional) – A function/transform that takes in a TxHxWxC video and returns a transformed version.

  • download (bool) – Download the official version of the dataset to the root folder.

  • num_workers (int) – Use multiple workers for VideoClips creation

  • num_download_workers (int) – Use multiprocessing in order to speed up download.

  • output_format (str, optional) – The format of the output video tensors (before transforms). Can be either “THWC” or “TCHW” (default). Note that in most other utils and datasets, the default is actually “THWC”.
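Given the root layout described above, a dataset instance could be constructed as sketched below. The class names, clip names, and temporary root are placeholders made up for illustration; the empty .mp4 files only stand in for real videos:

```python
import tempfile
from pathlib import Path

# Build the expected layout under a temporary root: root/split/class/clip.mp4.
root = Path(tempfile.mkdtemp())
for cls in ("abseiling", "archery"):  # placeholder class names
    d = root / "train" / cls
    d.mkdir(parents=True)
    for i in (1, 2):
        (d / f"clip{i}.mp4").touch()  # empty stand-ins for real videos

print(sorted(p.relative_to(root).as_posix() for p in root.rglob("*.mp4")))

# With real videos in place, the dataset could then be constructed as:
# from torchvision.datasets import Kinetics
# ds = Kinetics(root=str(root), frames_per_clip=5, split="train")
```

Note that split ("train" here) is appended to root automatically, so root itself should be the directory containing the split folders.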


Returns:

A 3-tuple with the following entries:

  • video (Tensor[T, C, H, W] or Tensor[T, H, W, C]): the T video frames in a torch.uint8 tensor

  • audio (Tensor[K, L]): the audio frames, where K is the number of channels and L is the number of points, in a torch.float tensor

  • label (int): class of the video clip

Return type:

Tuple[Tensor, Tensor, int]

Raises:

RuntimeError – If download is True and the video archives are already extracted.
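The video entry of the returned tuple is T×C×H×W by default (see output_format above), while many other utilities expect T×H×W×C. With real tensors the conversion is simply video.permute(0, 3, 1, 2); the plain-Python sketch below shows the same reordering on nested lists so it runs without torch:

```python
# A THWC -> TCHW reorder sketch using nested lists; with a real tensor
# this is video.permute(0, 3, 1, 2).
def thwc_to_tchw(video):
    T, H = len(video), len(video[0])
    W, C = len(video[0][0]), len(video[0][0][0])
    return [[[[video[t][h][w][c] for w in range(W)] for h in range(H)]
             for c in range(C)] for t in range(T)]

# Tiny 1-frame, 2x2, 3-channel example.
thwc = [[[[0, 1, 2], [3, 4, 5]],
         [[6, 7, 8], [9, 10, 11]]]]
tchw = thwc_to_tchw(thwc)
print(len(tchw), len(tchw[0]), len(tchw[0][0]), len(tchw[0][0][0]))  # 1 3 2 2
```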


__getitem__(idx: int) → Tuple[Tensor, Tensor, int][source]

Parameters:

idx (int) – Index


Returns:

Sample and metadata, optionally transformed by the respective transforms.

Return type:

Tuple[Tensor, Tensor, int]
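The 3-tuple contract of __getitem__ can be illustrated with a minimal stand-in class; FakeKineticsClips and its contents are entirely made up for illustration and carry no real video or audio data:

```python
# A minimal stand-in illustrating the (video, audio, label) 3-tuple
# contract of __getitem__; strings stand in for the actual tensors.
class FakeKineticsClips:
    def __init__(self, clips):
        self.clips = clips  # list of (video, audio, label) triples

    def __len__(self):
        return len(self.clips)

    def __getitem__(self, idx):
        return self.clips[idx]

ds = FakeKineticsClips([("video0", "audio0", 3), ("video1", "audio1", 7)])
video, audio, label = ds[1]  # the usual unpacking idiom
print(label)  # 7
```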
