# torchvision.io¶

The torchvision.io package provides functions for performing IO operations. They are currently specific to reading and writing video and images.

## Video¶

torchvision.io.read_video(filename: str, start_pts: int = 0, end_pts: Optional[float] = None, pts_unit: str = 'pts') → Tuple[torch.Tensor, torch.Tensor, Dict[str, Any]][source]

Reads a video from a file, returning both the video and the audio frames.

Parameters
• filename (str) – path to the video file

• start_pts (int if pts_unit = 'pts', float / Fraction if pts_unit = 'sec', optional) – The start presentation time of the video

• end_pts (int if pts_unit = 'pts', float / Fraction if pts_unit = 'sec', optional) – The end presentation time

• pts_unit (str, optional) – unit in which start_pts and end_pts values will be interpreted, either ‘pts’ or ‘sec’. Defaults to ‘pts’.

Returns

vframes (Tensor[T, H, W, C]): the T video frames

aframes (Tensor[K, L]): the audio frames, where K is the number of channels and L is the number of points

info (Dict): metadata for the video and audio. Can contain the fields video_fps (float) and audio_fps (int)

Return type

Tuple[torch.Tensor, torch.Tensor, Dict[str, Any]]

torchvision.io.read_video_timestamps(filename: str, pts_unit: str = 'pts') → Tuple[List[int], Optional[float]][source]

Lists the timestamps of the video frames.

Note that the function decodes the whole video frame-by-frame.

Parameters
• filename (str) – path to the video file

• pts_unit (str, optional) – unit in which timestamp values will be returned either ‘pts’ or ‘sec’. Defaults to ‘pts’.

Returns

pts (List[int] if pts_unit = 'pts', List[Fraction] if pts_unit = 'sec'): presentation timestamps for each of the frames in the video

video_fps (float, optional): the frame rate of the video

Return type

Tuple[List[int], Optional[float]]

torchvision.io.write_video(filename: str, video_array: torch.Tensor, fps: float, video_codec: str = 'libx264', options: Optional[Dict[str, Any]] = None, audio_array: Optional[torch.Tensor] = None, audio_fps: Optional[float] = None, audio_codec: Optional[str] = None, audio_options: Optional[Dict[str, Any]] = None) → None[source]

Writes a 4d tensor in [T, H, W, C] format to a video file

Parameters
• filename (str) – path where the video will be saved

• video_array (Tensor[T, H, W, C]) – tensor containing the individual frames, as a uint8 tensor in [T, H, W, C] format

• fps (Number) – video frames per second

• video_codec (str) – the name of the video codec, i.e. “libx264”, “h264”, etc.

• options (Dict) – dictionary containing options to be passed into the PyAV video stream

• audio_array (Tensor[C, N]) – tensor containing the audio, where C is the number of channels and N is the number of samples

• audio_fps (Number) – audio sample rate, typically 44100 or 48000

• audio_codec (str) – the name of the audio codec, i.e. “mp3”, “aac”, etc.

• audio_options (Dict) – dictionary containing options to be passed into the PyAV audio stream

## Fine-grained video API¶

In addition to the read_video function, we provide a high-performance, lower-level API for more fine-grained control over video reading, all while fully supporting TorchScript.

class torchvision.io.VideoReader(path, stream='video')[source]

Fine-grained video-reading API. Supports frame-by-frame reading of various streams from a single video container.

Example

The following example creates a VideoReader object, seeks to the 2-second point, and returns a single frame:

import torchvision
video_path = "path_to_a_test_video"
reader = torchvision.io.VideoReader(video_path, "video")
reader.seek(2.0)
frame = next(reader)


VideoReader implements the iterable API, which makes it suitable for use in conjunction with itertools for more advanced reading. As such, we can use a VideoReader instance inside for loops:

frames = []
reader.seek(2)
for frame in reader:
    frames.append(frame['data'])
# additionally, seek implements a fluent API, so we can do
for frame in reader.seek(2):
    frames.append(frame['data'])


With itertools, we can read all frames between 2 and 5 seconds with the following code:

for frame in itertools.takewhile(lambda x: x['pts'] <= 5, reader.seek(2)):
    frames.append(frame['data'])


and similarly, reading 10 frames after the 2s timestamp can be achieved as follows:

for frame in itertools.islice(reader.seek(2), 10):
    frames.append(frame['data'])


Note

Each stream descriptor consists of two parts: stream type (e.g. ‘video’) and a unique stream id (which are determined by the video encoding). In this way, if the video container contains multiple streams of the same type, users can access the one they want. If only the stream type is passed, the decoder auto-detects the first stream of that type.

Parameters
• path (string) – Path to the video file in supported format

• stream (string, optional) – descriptor of the required stream, followed by the stream id, in the format {stream_type}:{stream_id}. Defaults to "video:0". Currently available options include ['video', 'audio']

__next__()[source]

Decodes and returns the next frame of the current stream. Frames are encoded as a dict with mandatory data and pts fields, where data is a tensor, and pts is a presentation timestamp of the frame expressed in seconds as a float.

Returns

a dictionary containing the decoded frame (data) and its corresponding timestamp (pts) in seconds

Return type

(dict)

get_metadata()[source]

Returns

dictionary containing duration and frame rate for every stream

Return type

(dict)

seek(time_s: float)[source]

Seek within current stream.

Parameters

time_s (float) – seek time in seconds

Note

The current implementation performs a so-called precise seek. This means that, following a seek, a call to next() will return the frame with the exact timestamp if it exists, or the first frame with a timestamp larger than time_s.

set_current_stream(stream: str)[source]

Set current stream. Explicitly define the stream we are operating on.

Parameters

stream (string) – descriptor of the required stream. Defaults to "video:0". Currently available stream types include ['video', 'audio']. Each descriptor consists of two parts: stream type (e.g. ‘video’) and a unique stream id (which are determined by video encoding). In this way, if the video container contains multiple streams of the same type, users can access the one they want. If only the stream type is passed, the decoder auto-detects the first stream of that type and returns it.

Returns

True on success, False otherwise

Return type

(bool)

Example of inspecting a video:

import torchvision
video_path = "path to a test video"
# Constructor allocates memory and a threaded decoder
# instance per video. At the moment it takes two arguments:
# path to the video file, and a wanted stream.
reader = torchvision.io.VideoReader(video_path, "video")

# The information about the video can be retrieved using the
# get_metadata() method. It returns a dictionary for every stream, with
# duration and other relevant metadata (often frame rate)
reader_md = reader.get_metadata()

# metadata is structured as a dict of dicts with following structure
# {"stream_type": {"attribute": [attribute per stream]}}
#
# following would print out the list of frame rates for every present video stream
print(reader_md["video"]["fps"])

# we explicitly select the stream we would like to operate on. In
# the constructor we select a default video stream, but
# in practice, we can set whichever stream we would like
reader.set_current_stream("video:0")


## Image¶

class torchvision.io.ImageReadMode[source]

Support for various modes while reading images.

Use ImageReadMode.UNCHANGED for loading the image as-is, ImageReadMode.GRAY for converting to grayscale, ImageReadMode.GRAY_ALPHA for grayscale with transparency, ImageReadMode.RGB for RGB and ImageReadMode.RGB_ALPHA for RGB with transparency.

torchvision.io.read_image(path: str, mode: torchvision.io.image.ImageReadMode = <ImageReadMode.UNCHANGED: 0>) → torch.Tensor[source]

Reads a JPEG or PNG image into a 3 dimensional RGB Tensor. Optionally converts the image to the desired format. The values of the output tensor are uint8 between 0 and 255.

Parameters
• path (str) – path of the JPEG or PNG image.

• mode (ImageReadMode) – the read mode used for optionally converting the image. Default: ImageReadMode.UNCHANGED. See ImageReadMode class for more information on various available modes.

Returns

output (Tensor[image_channels, image_height, image_width])

torchvision.io.decode_image(input: torch.Tensor, mode: torchvision.io.image.ImageReadMode = <ImageReadMode.UNCHANGED: 0>) → torch.Tensor[source]

Detects whether an image is a JPEG or PNG and performs the appropriate operation to decode the image into a 3 dimensional RGB Tensor.

Optionally converts the image to the desired format. The values of the output tensor are uint8 between 0 and 255.

Parameters
• input (Tensor) – a one dimensional uint8 tensor containing the raw bytes of the PNG or JPEG image.

• mode (ImageReadMode) – the read mode used for optionally converting the image. Default: ImageReadMode.UNCHANGED. See ImageReadMode class for more information on various available modes.

Returns

output (Tensor[image_channels, image_height, image_width])

torchvision.io.encode_jpeg(input: torch.Tensor, quality: int = 75) → torch.Tensor[source]

Takes an input tensor in CHW layout and returns a buffer with the contents of its corresponding JPEG file.

Parameters
• input (Tensor[channels, image_height, image_width]) – uint8 image tensor of c channels, where c must be 1 or 3.

• quality (int) – Quality of the resulting JPEG file, it must be a number between 1 and 100. Default: 75

Returns

a one dimensional uint8 tensor that contains the raw bytes of the JPEG file

Return type

output (Tensor[1])

torchvision.io.decode_jpeg(input: torch.Tensor, mode: torchvision.io.image.ImageReadMode = <ImageReadMode.UNCHANGED: 0>, device: str = 'cpu') → torch.Tensor[source]

Decodes a JPEG image into a 3 dimensional RGB Tensor. Optionally converts the image to the desired format. The values of the output tensor are uint8 between 0 and 255.

Parameters
• input (Tensor[1]) – a one dimensional uint8 tensor containing the raw bytes of the JPEG image. This tensor must be on CPU, regardless of the device parameter.

• mode (ImageReadMode) – the read mode used for optionally converting the image. Default: ImageReadMode.UNCHANGED. See ImageReadMode class for more information on various available modes.

• device (str or torch.device) – The device on which the decoded image will be stored. If a cuda device is specified, the image will be decoded with nvjpeg. This is only supported for CUDA version >= 10.1

Returns

output (Tensor[image_channels, image_height, image_width])

torchvision.io.write_jpeg(input: torch.Tensor, filename: str, quality: int = 75)[source]

Takes an input tensor in CHW layout and saves it in a JPEG file.

Parameters
• input (Tensor[channels, image_height, image_width]) – uint8 image tensor of c channels, where c must be 1 or 3.

• filename (str) – Path to save the image.

• quality (int) – Quality of the resulting JPEG file, it must be a number between 1 and 100. Default: 75

torchvision.io.encode_png(input: torch.Tensor, compression_level: int = 6) → torch.Tensor[source]

Takes an input tensor in CHW layout and returns a buffer with the contents of its corresponding PNG file.

Parameters
• input (Tensor[channels, image_height, image_width]) – uint8 image tensor of c channels, where c must be 1 or 3.

• compression_level (int) – Compression factor for the resulting file, it must be a number between 0 and 9. Default: 6

Returns

a one dimensional uint8 tensor that contains the raw bytes of the PNG file

Return type

Tensor[1]

torchvision.io.decode_png(input: torch.Tensor, mode: torchvision.io.image.ImageReadMode = <ImageReadMode.UNCHANGED: 0>) → torch.Tensor[source]

Decodes a PNG image into a 3 dimensional RGB Tensor. Optionally converts the image to the desired format. The values of the output tensor are uint8 between 0 and 255.

Parameters
• input (Tensor[1]) – a one dimensional uint8 tensor containing the raw bytes of the PNG image.

• mode (ImageReadMode) – the read mode used for optionally converting the image. Default: ImageReadMode.UNCHANGED. See ImageReadMode class for more information on various available modes.

Returns

output (Tensor[image_channels, image_height, image_width])

torchvision.io.write_png(input: torch.Tensor, filename: str, compression_level: int = 6)[source]

Takes an input tensor in CHW layout (or HW in the case of grayscale images) and saves it in a PNG file.

Parameters
• input (Tensor[channels, image_height, image_width]) – uint8 image tensor of c channels, where c must be 1 or 3.

• filename (str) – Path to save the image.

• compression_level (int) – Compression factor for the resulting file, it must be a number between 0 and 9. Default: 6

torchvision.io.read_file(path: str) → torch.Tensor[source]

Reads and outputs the bytes contents of a file as a uint8 Tensor with one dimension.

Parameters

path (str) – the path to the file to be read

Returns

data (Tensor)

torchvision.io.write_file(filename: str, data: torch.Tensor) → None[source]

Writes the contents of a uint8 tensor with one dimension to a file.

Parameters
• filename (str) – the path to the file to be written

• data (Tensor) – the contents to be written to the output file