VideoReader
- class torchvision.io.VideoReader(src: str = '', stream: str = 'video', num_threads: int = 0, path: Optional[str] = None)[source]
Fine-grained video-reading API. Supports frame-by-frame reading of various streams from a single video container. Much like the previous video_reader API, it supports the following backends: video_reader, pyav, and cuda. The backend can be set via the torchvision.set_video_backend function.
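For instance, a reader that decodes with the native video_reader backend could be set up as follows (a minimal sketch; the path is a placeholder, and the "cuda" backend additionally requires a build with GPU decoding support):

    import torchvision

    # valid values are "pyav", "video_reader", and "cuda";
    # availability depends on how torchvision was built
    torchvision.set_video_backend("video_reader")
    reader = torchvision.io.VideoReader("path_to_a_test_video")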
Warning
The VideoReader class is in Beta stage, and backward compatibility is not guaranteed.
Example
The following example creates a VideoReader object, seeks to the 2-second point, and returns a single frame:

    import torchvision

    video_path = "path_to_a_test_video"
    reader = torchvision.io.VideoReader(video_path, "video")
    reader.seek(2.0)
    frame = next(reader)
VideoReader implements the iterable API, which makes it well suited for use in conjunction with itertools for more advanced reading. As such, we can use a VideoReader instance inside for loops:

    reader.seek(2)
    for frame in reader:
        frames.append(frame['data'])

    # additionally, `seek` implements a fluent API, so we can do
    for frame in reader.seek(2):
        frames.append(frame['data'])
With itertools, we can read all frames between 2 and 5 seconds with the following code:

    for frame in itertools.takewhile(lambda x: x['pts'] <= 5, reader.seek(2)):
        frames.append(frame['data'])
and similarly, reading 10 frames after the 2s timestamp can be achieved as follows:

    for frame in itertools.islice(reader.seek(2), 10):
        frames.append(frame['data'])
Note
Each stream descriptor consists of two parts: stream type (e.g. 'video') and a unique stream id (which is determined by the video encoding). In this way, if the video container contains multiple streams of the same type, users can access the one they want. If only the stream type is passed, the decoder auto-detects the first stream of that type.
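For example, assuming a container that actually holds two audio tracks, the second one could be selected explicitly by its id (a sketch; the path is a placeholder):

    import torchvision

    # "audio:1" requests the second audio stream of the container
    reader = torchvision.io.VideoReader("path_to_a_test_video", "audio:1")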
- Parameters:
src (string, bytes object, or tensor) – The media source. If string-type, it must be a file path supported by FFMPEG. If bytes, it should be an in-memory representation of a file supported by FFMPEG. If Tensor, it is interpreted internally as a byte buffer. It must be one-dimensional, of type torch.uint8.
stream (string, optional) – descriptor of the required stream, followed by the stream id, in the format {stream_type}:{stream_id}. Defaults to "video:0". Currently available options include ['video', 'audio'].
num_threads (int, optional) – number of threads used by the codec to decode the video. The default value (0) enables multithreading with a codec-dependent heuristic. Performance will depend on the version of the FFMPEG codecs supported.
path (str, optional) – deprecated alias for src; use src instead.
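As an illustration of the bytes and tensor sources, a minimal sketch (the file path is a placeholder) could load the container into memory first:

    import torch
    import torchvision

    # read the raw container bytes
    with open("path_to_a_test_video", "rb") as f:
        raw = f.read()

    # pass the in-memory bytes directly ...
    reader = torchvision.io.VideoReader(raw, "video")

    # ... or wrap them in a one-dimensional torch.uint8 tensor
    data = torch.frombuffer(bytearray(raw), dtype=torch.uint8)
    reader = torchvision.io.VideoReader(data, "video", num_threads=2)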
Examples using VideoReader:
- Optical Flow: Predicting movement with the RAFT model
- Video API
- get_metadata() → Dict[str, Any] [source]
Returns video metadata
- Returns:
dictionary containing duration and frame rate for every stream
- Return type:
(dict)
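A minimal sketch of reading the metadata (the exact keys depend on the backend and on the streams present; the shape shown in the comment is what the pyav backend typically reports):

    reader = torchvision.io.VideoReader("path_to_a_test_video", "video")
    md = reader.get_metadata()
    # e.g. {'video': {'duration': [10.98], 'fps': [29.97]},
    #       'audio': {'duration': [10.98], 'framerate': [44100.0]}}
    duration = md["video"]["duration"][0]
    fps = md["video"]["fps"][0]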
- seek(time_s: float, keyframes_only: bool = False) → VideoReader [source]
Seek within current stream.
- Parameters:
time_s (float) – seek time in seconds
keyframes_only (bool, optional) – allow seeking only to keyframes
Note
The current implementation is the so-called precise seek. This means that following a seek, a call to next() will return the frame with the exact timestamp, if it exists, or the first frame with a timestamp larger than time_s.
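A short sketch contrasting the two modes (with keyframes_only=True the decoder stops at a keyframe near the requested timestamp instead of the exact frame, which is typically faster):

    # precise seek: next() returns the frame at, or first after, 2.0s
    frame = next(reader.seek(2.0))

    # keyframe-only seek: faster, but only keyframe-accurate
    frame = next(reader.seek(2.0, keyframes_only=True))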
- set_current_stream(stream: str) → bool [source]
Set current stream. Explicitly define the stream we are operating on.
- Parameters:
stream (string) – descriptor of the required stream. Defaults to "video:0". Currently available stream types include ['video', 'audio']. Each descriptor consists of two parts: stream type (e.g. 'video') and a unique stream id (which is determined by the video encoding). In this way, if the video container contains multiple streams of the same type, users can access the one they want. If only the stream type is passed, the decoder auto-detects the first stream of that type and returns it.
- Returns:
True on success, False otherwise
- Return type:
(bool)
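A minimal sketch of switching from the default video stream to the first audio stream (assuming the container has one; the path is a placeholder):

    import torchvision

    reader = torchvision.io.VideoReader("path_to_a_test_video", "video")
    if reader.set_current_stream("audio:0"):
        # iteration now yields audio chunks instead of video frames
        chunk = next(reader)
        print(chunk["pts"], chunk["data"].shape)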