StreamingMediaEncoder

class torio.io.StreamingMediaEncoder(dst: Union[str, Path, BinaryIO], format: Optional[str] = None, buffer_size: int = 4096)[source]

Encode and write audio/video streams chunk by chunk

Parameters:
  • dst (str, path-like or file-like object) –

    The destination where the encoded data are written. If string-type, it must be a resource indicator that FFmpeg can handle. The supported values depend on the FFmpeg found on the system.

    If file-like object, it must support the write method with the signature write(data: bytes) -> int, i.e. it takes the encoded bytes and returns the number of bytes written. (An example of writing to a file-like object follows this parameter list.)

  • format (str or None, optional) –

    Override the output format, or specify the output media device. Default: None (no override nor device output).

    This argument serves two different use cases.

    1. Override the output format. This is useful when writing raw data, or when the desired container format differs from what the file extension implies.

    2. Specify the output device. This allows outputting media streams to hardware devices, such as speakers and video screens.

    Note

    This option roughly corresponds to the -f option of the ffmpeg command. Please refer to the ffmpeg documentation for possible values.

    https://ffmpeg.org/ffmpeg-formats.html#Muxers

    Please use get_muxers() to list the multiplexers available in the current environment.

    For device access, the available values vary based on hardware (AV device) and software configuration (ffmpeg build). Please refer to the ffmpeg documentation for possible values.

    https://ffmpeg.org/ffmpeg-devices.html#Output-Devices

    Please use get_output_devices() to list the output devices available in the current environment.

  • buffer_size (int) –

    The internal buffer size in bytes. Used only when dst is a file-like object.

    Default: 4096.
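
Example - Writing to a file-like object (an illustrative sketch; it assumes torch and StreamingMediaEncoder are imported, and the in-memory buffer and WAV container are arbitrary choices)
>>> import io
>>> buffer = io.BytesIO()
>>> # The buffer has no file extension, so the container format is given explicitly.
>>> s = StreamingMediaEncoder(buffer, format="wav")
>>> s.add_audio_stream(sample_rate=16000, num_channels=1)
>>> with s.open() as f:
>>>     # One second of float32 audio; shape is (frame, channel).
>>>     f.write_audio_chunk(0, torch.rand(16000, 1))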

Methods

add_audio_stream

StreamingMediaEncoder.add_audio_stream(sample_rate: int, num_channels: int, format: str = 'flt', *, encoder: Optional[str] = None, encoder_option: Optional[Dict[str, str]] = None, encoder_sample_rate: Optional[int] = None, encoder_num_channels: Optional[int] = None, encoder_format: Optional[str] = None, codec_config: Optional[CodecConfig] = None, filter_desc: Optional[str] = None)[source]

Add an output audio stream.

Parameters:
  • sample_rate (int) – The sample rate.

  • num_channels (int) – The number of channels.

  • format (str, optional) –

    Input sample format, which determines the dtype of the input tensor.

    • "u8": The input tensor must be torch.uint8 type.

    • "s16": The input tensor must be torch.int16 type.

    • "s32": The input tensor must be torch.int32 type.

    • "s64": The input tensor must be torch.int64 type.

    • "flt": The input tensor must be torch.float32 type.

    • "dbl": The input tensor must be torch.float64 type.

    Default: "flt".

  • encoder (str or None, optional) –

    The name of the encoder to be used. When provided, use the specified encoder instead of the default one.

    To list the available encoders, please use get_audio_encoders() for audio, and get_video_encoders() for video.

    Default: None.

  • encoder_option (dict or None, optional) –

    Options passed to the encoder. Mapping from str to str.

    To list the options for an encoder, you can use the ffmpeg -h encoder=<ENCODER> command.

    Default: None.


    In addition to encoder-specific options, you can also pass options related to multithreading. They are effective only if the encoder supports them. If neither of them is provided, StreamingMediaEncoder defaults to single thread.

    "threads": The number of threads (in str). Providing the value "0" will let FFmpeg decides based on its heuristics.

    "thread_type": Which multithreading method to use. The valid values are "frame" or "slice". Note that each encoder supports different set of methods. If not provided, a default value is used.

    • "frame": Encode more than one frame at once. Each thread handles one frame. This will increase decoding delay by one frame per thread

    • "slice": Encode more than one part of a single frame at once.


  • encoder_sample_rate (int or None, optional) –

    Override the sample rate used for encoding. Some encoders pose restrictions on the sample rate that can be used for encoding. If the source sample rate is supported by the encoder, it is used as-is; otherwise, a default one is picked.

    For example, "opus" encoder only supports 48k Hz, so, when encoding a waveform with "opus" encoder, it is always encoded as 48k Hz. Meanwhile "mp3" ("libmp3lame") supports 44.1k, 48k, 32k, 22.05k, 24k, 16k, 11.025k, 12k and 8k Hz. If the original sample rate is one of these, then the original sample rate is used, otherwise it will be resampled to a default one (44.1k). When encoding into WAV format, there is no restriction on sample rate, so the original sample rate will be used.

    Providing encoder_sample_rate will override this behavior and make the encoder attempt to use the provided sample rate. The provided value must be one supported by the encoder.

  • encoder_num_channels (int or None, optional) –

    Override the number of channels used for encoding.

    Similar to the sample rate, some encoders (such as "opus", "vorbis" and "g722") pose restrictions on the number of channels that can be used for encoding.

    If the original number of channels is supported by the encoder, then it will be used; otherwise, the encoder attempts to remix the channels to one of the supported configurations.

    Providing encoder_num_channels will override this behavior and make the encoder attempt to use the provided number of channels. The provided value must be one supported by the encoder.

  • encoder_format (str or None, optional) –

    Format used to encode the media. When the encoder supports multiple formats, passing this argument will override the format used for encoding. (See the example following this parameter list.)

    To list the supported formats for an encoder, you can use the ffmpeg -h encoder=<ENCODER> command.

    Default: None.

    Note

    When the encoder_format option is not provided, the encoder uses its default format.

    For example, when encoding audio into WAV format, 16-bit signed integer is used, and when encoding video into MP4 format (h264 encoder), one of the YUV formats is used.

    This is because, typically, 32-bit or 16-bit floating point is used in audio models, but these formats are not commonly used in audio containers. Similarly, RGB24 is commonly used in vision models, but video formats usually support (and compress better with) YUV formats.

  • codec_config (CodecConfig or None, optional) –

    Codec configuration. Please refer to CodecConfig for configuration options.

    Default: None.

  • filter_desc (str or None, optional) – Additional processing to apply before encoding the input media.
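
Example - Adding an audio stream (an illustrative sketch; the file name, sample rate and format choices are arbitrary). The input tensor is float32 ("flt"), while encoder_format="s16" requests the 16-bit integer sample format supported by the FLAC encoder.
>>> s = StreamingMediaEncoder("output.flac")
>>> s.add_audio_stream(
>>>     sample_rate=44100,
>>>     num_channels=2,
>>>     format="flt",          # input tensor is torch.float32
>>>     encoder_format="s16",  # encode as 16-bit signed integer
>>> )
>>> with s.open() as f:
>>>     f.write_audio_chunk(0, torch.rand(44100, 2))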

add_video_stream

StreamingMediaEncoder.add_video_stream(frame_rate: float, width: int, height: int, format: str = 'rgb24', *, encoder: Optional[str] = None, encoder_option: Optional[Dict[str, str]] = None, encoder_frame_rate: Optional[float] = None, encoder_width: Optional[int] = None, encoder_height: Optional[int] = None, encoder_format: Optional[str] = None, codec_config: Optional[CodecConfig] = None, filter_desc: Optional[str] = None, hw_accel: Optional[str] = None)[source]

Add an output video stream.

This method has to be called before open is called.

Parameters:
  • frame_rate (float) – Frame rate of the video.

  • width (int) – Width of the video frame.

  • height (int) – Height of the video frame.

  • format (str, optional) –

    Input pixel format, which determines the color channel order of the input tensor.

    • "gray8": One channel, grayscale.

    • "rgb24": Three channels in the order of RGB.

    • "bgr24": Three channels in the order of BGR.

    • "yuv444p": Three channels in the order of YUV.

    Default: "rgb24".

    In all cases, the input tensor has to be torch.uint8 type and the shape must be (frame, channel, height, width); see the example following this parameter list.

  • encoder (str or None, optional) –

    The name of the encoder to be used. When provided, use the specified encoder instead of the default one.

    To list the available encoders, please use get_audio_encoders() for audio, and get_video_encoders() for video.

    Default: None.

  • encoder_option (dict or None, optional) –

    Options passed to the encoder. Mapping from str to str.

    To list the options for an encoder, you can use the ffmpeg -h encoder=<ENCODER> command.

    Default: None.


    In addition to encoder-specific options, you can also pass options related to multithreading. They are effective only if the encoder supports them. If neither of them is provided, StreamingMediaEncoder defaults to single thread.

    "threads": The number of threads (in str). Providing the value "0" will let FFmpeg decides based on its heuristics.

    "thread_type": Which multithreading method to use. The valid values are "frame" or "slice". Note that each encoder supports different set of methods. If not provided, a default value is used.

    • "frame": Encode more than one frame at once. Each thread handles one frame. This will increase decoding delay by one frame per thread

    • "slice": Encode more than one part of a single frame at once.


  • encoder_frame_rate (float or None, optional) –

    Override the frame rate used for encoding.

    Some encoders (such as "mpeg1" and "mpeg2") pose restrictions on the frame rate that can be used for encoding. In such cases, if the source frame rate (provided as frame_rate) is not one of the supported frame rates, a default one is picked and the frame rate is changed on-the-fly. Otherwise the source frame rate is used.

    Providing encoder_frame_rate will override this behavior and make the encoder attempt to use the provided frame rate. The provided value must be one supported by the encoder.

  • encoder_width (int or None, optional) – Width of the image used for encoding. This allows changing the image size during encoding.

  • encoder_height (int or None, optional) – Height of the image used for encoding. This allows changing the image size during encoding.

  • encoder_format (str or None, optional) –

    Format used to encode the media. When the encoder supports multiple formats, passing this argument will override the format used for encoding.

    To list the supported formats for an encoder, you can use the ffmpeg -h encoder=<ENCODER> command.

    Default: None.

    Note

    When the encoder_format option is not provided, the encoder uses its default format.

    For example, when encoding audio into WAV format, 16-bit signed integer is used, and when encoding video into MP4 format (h264 encoder), one of the YUV formats is used.

    This is because, typically, 32-bit or 16-bit floating point is used in audio models, but these formats are not commonly used in audio containers. Similarly, RGB24 is commonly used in vision models, but video formats usually support (and compress better with) YUV formats.

  • codec_config (CodecConfig or None, optional) –

    Codec configuration. Please refer to CodecConfig for configuration options.

    Default: None.

  • filter_desc (str or None, optional) – Additional processing to apply before encoding the input media.

  • hw_accel (str or None, optional) –

    Enable hardware acceleration.

    When video is encoded on CUDA hardware, for example with encoder="h264_nvenc", passing a CUDA device indicator to hw_accel (i.e. hw_accel="cuda:0") will make StreamingMediaEncoder expect the video chunk to be a CUDA Tensor. Passing a CPU Tensor will result in an error.

    If None, the video chunk Tensor has to be a CPU Tensor. Default: None.
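
Example - Adding a video stream (an illustrative sketch; the file name, resolution and frame rate are arbitrary). The chunk is torch.uint8 with shape (frame, channel, height, width), matching the default format="rgb24".
>>> s = StreamingMediaEncoder("output.mp4")
>>> s.add_video_stream(frame_rate=30, width=320, height=240, format="rgb24")
>>> with s.open() as f:
>>>     # 30 frames (one second) of random RGB video.
>>>     frames = torch.randint(0, 256, (30, 3, 240, 320), dtype=torch.uint8)
>>>     f.write_video_chunk(0, frames)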

close

StreamingMediaEncoder.close()[source]

Close the output

StreamingMediaEncoder is also a context manager and therefore supports the with statement. It is recommended to use the context manager, as the file is closed automatically when exiting from the with clause.

See StreamingMediaEncoder.open() for more detail.

flush

StreamingMediaEncoder.flush()[source]

Flush the frames from encoders and write the frames to the destination.

open

StreamingMediaEncoder.open(option: Optional[Dict[str, str]] = None) StreamingMediaEncoder[source]

Open the output file / device and write the header.

StreamingMediaEncoder is also a context manager and therefore supports the with statement. This method returns the instance on which the method is called (i.e. self), so that it can be used in a with statement. It is recommended to use the context manager, as the file is closed automatically when exiting from the with clause.

Parameters:

option (dict or None, optional) – Private options for protocol, device and muxer. See example.

Example - Protocol option
>>> s = StreamingMediaEncoder(dst="rtmp://localhost:1234/live/app", format="flv")
>>> s.add_video_stream(...)
>>> # Passing protocol option `listen=1` makes StreamingMediaEncoder act as RTMP server.
>>> with s.open(option={"listen": "1"}) as f:
>>>     f.write_video_chunk(...)
Example - Device option
>>> s = StreamingMediaEncoder("-", format="sdl")
>>> s.add_video_stream(..., encoder_format="rgb24")
>>> # Open SDL video player with fullscreen
>>> with s.open(option={"window_fullscreen": "1"}):
>>>     f.write_video_chunk(...)
Example - Muxer option
>>> s = StreamingMediaEncoder("foo.flac")
>>> s.add_audio_stream(...)
>>> s.set_metadata({"artist": "torio contributors"})
>>> # FLAC muxer has a private option to not write the header.
>>> # The resulting file does not contain the above metadata.
>>> with s.open(option={"write_header": "false"}) as f:
>>>     f.write_audio_chunk(...)

set_metadata

StreamingMediaEncoder.set_metadata(metadata: Dict[str, str])[source]

Set file-level metadata

Parameters:

metadata (dict or None, optional) – File-level metadata.
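
Example - Setting metadata (an illustrative sketch; the tag values are arbitrary). As in the muxer example above, the metadata is set before open() so that it is written together with the header.
>>> s = StreamingMediaEncoder("output.flac")
>>> s.add_audio_stream(sample_rate=44100, num_channels=2)
>>> s.set_metadata({"artist": "torio contributors", "title": "example"})
>>> with s.open() as f:
>>>     f.write_audio_chunk(0, torch.rand(44100, 2))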

write_audio_chunk

StreamingMediaEncoder.write_audio_chunk(i: int, chunk: Tensor, pts: Optional[float] = None)[source]

Write audio data

Parameters:
  • i (int) – Stream index.

  • chunk (Tensor) – Waveform tensor. Shape: (frame, channel). The dtype must match what was passed to the add_audio_stream() method.

  • pts (float, optional, or None) –

    If provided, overwrite the presentation timestamp.

    Note

    The provided value is converted to an integer value expressed in the basis of the sample rate. Therefore, it is truncated to the nearest value of n / sample_rate.
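
Example - Writing audio in chunks (an illustrative sketch; the one-second chunking and file name are arbitrary). Successive calls append frames to the stream identified by index i.
>>> s = StreamingMediaEncoder("output.wav")
>>> s.add_audio_stream(sample_rate=8000, num_channels=1)
>>> waveform = torch.rand(24000, 1)  # three seconds of float32 audio
>>> with s.open() as f:
>>>     # Write one-second chunks to stream index 0.
>>>     for start in range(0, waveform.size(0), 8000):
>>>         f.write_audio_chunk(0, waveform[start:start + 8000])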

write_video_chunk

StreamingMediaEncoder.write_video_chunk(i: int, chunk: Tensor, pts: Optional[float] = None)[source]

Write video/image data

Parameters:
  • i (int) – Stream index.

  • chunk (Tensor) – Video/image tensor. Shape: (frame, channel, height, width). The dtype must be torch.uint8. The shape (height, width and the number of channels) must match what was configured when calling add_video_stream().

  • pts (float, optional or None) –

    If provided, overwrite the presentation timestamp.

    Note

    The provided value is converted to an integer value expressed in the basis of the frame rate. Therefore, it is truncated to the nearest value of n / frame_rate.
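
Example - Overriding pts (an illustrative sketch; the values are arbitrary). The second call resumes writing at t = 2.0 s; the timestamp is quantized to a multiple of 1 / frame_rate as noted above.
>>> s = StreamingMediaEncoder("output.mp4")
>>> s.add_video_stream(frame_rate=10, width=64, height=64)
>>> frames = torch.randint(0, 256, (10, 3, 64, 64), dtype=torch.uint8)
>>> with s.open() as f:
>>>     f.write_video_chunk(0, frames)            # frames at t = 0.0 ... 0.9 s
>>>     f.write_video_chunk(0, frames, pts=2.0)   # resume at t = 2.0 s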

Support Structures

CodecConfig

class torio.io.CodecConfig(bit_rate: int = -1, compression_level: int = -1, qscale: Optional[int] = None, gop_size: int = -1, max_b_frames: int = -1)[source]

Codec configuration.

bit_rate: int = -1

Bit rate

compression_level: int = -1

Compression level

qscale: Optional[int] = None

Global quality factor. Enables variable bit rate. Valid values depend on encoder.

For example, MP3 takes 0 to 9 (https://trac.ffmpeg.org/wiki/Encode/MP3) while libvorbis takes -1 to 10.

gop_size: int = -1

The number of pictures in a group of pictures, or 0 for intra_only.

max_b_frames: int = -1

Maximum number of B-frames between non-B-frames.
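
Example - Using CodecConfig (an illustrative sketch; the encoder and quality value are arbitrary). Setting qscale enables variable bit rate; for MP3 (libmp3lame), 0 is the highest quality and 9 the lowest.
>>> config = CodecConfig(qscale=2)
>>> s = StreamingMediaEncoder("output.mp3")
>>> s.add_audio_stream(sample_rate=44100, num_channels=2, codec_config=config)
>>> with s.open() as f:
>>>     f.write_audio_chunk(0, torch.rand(44100, 2))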
