VideoRecorder

class torchrl.record.VideoRecorder(logger: Logger, tag: str, in_keys: Optional[Sequence[NestedKey]] = None, skip: int | None = None, center_crop: Optional[int] = None, make_grid: bool | None = None, out_keys: Optional[Sequence[NestedKey]] = None, **kwargs)

Video Recorder transform.

Records a series of observations from an environment and writes them to a Logger object when its dump method is called.

Parameters:
  • logger (Logger) – a Logger instance where the video should be written. To save the video as a memmap tensor or an mp4 file, use the CSVLogger class.

  • tag (str) – the video tag in the logger.

  • in_keys (Sequence of NestedKey, optional) – keys to be read to produce the video. Default is "pixels".

  • skip (int) – frame interval in the output video. Default is 2 if the transform has a parent environment, and 1 if not.

  • center_crop (int, optional) – size of the square center crop applied to the frames.

  • make_grid (bool, optional) – if True, a grid is created assuming that a tensor of shape [B x W x H x 3] is provided, with B being the batch size. Default is True if the transform has a parent environment, and False if not.

  • out_keys (sequence of NestedKey, optional) – destination keys. Defaults to in_keys if not provided.

Examples

The following example shows how to save a rollout as a video. First, a few imports:

>>> from torchrl.record import VideoRecorder
>>> from torchrl.record.loggers.csv import CSVLogger
>>> from torchrl.envs import TransformedEnv, DMControlEnv

The video format is chosen in the logger. Wandb and TensorBoard take care of it on their own, while CSVLogger accepts various video formats.

>>> logger = CSVLogger(exp_name="cheetah", log_dir="cheetah_videos", video_format="mp4")
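
If wandb is installed, the same recipe works with a WandbLogger, which picks the video format by itself (a sketch; the project name is an illustrative assumption):

>>> from torchrl.record.loggers.wandb import WandbLogger
>>> # logger = WandbLogger(exp_name="cheetah", project="torchrl-videos")  # illustrative; the rest of this example keeps the CSV logger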

Some envs (e.g., Atari games) natively return images, while others require the user to ask for them. Check GymEnv or DMControlEnv to see how to render images in these contexts.

>>> base_env = DMControlEnv("cheetah", "run", from_pixels=True)
>>> env = TransformedEnv(base_env, VideoRecorder(logger=logger, tag="run_video"))
>>> env.rollout(100)
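
A Gym-based environment can be recorded the same way once it is asked to return pixels (a sketch; it assumes gymnasium and a rendering backend are available, and the variable and tag names are illustrative):

>>> from torchrl.envs import GymEnv
>>> gym_env = TransformedEnv(GymEnv("CartPole-v1", from_pixels=True), VideoRecorder(logger=logger, tag="cartpole_video"))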

All transforms have a dump method. For most it is a no-op; VideoRecorder writes the recorded frames to the logger, and Compose dispatches the dump to all of its members.

>>> env.transform.dump()
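
Since Compose dispatches dump to its members, the recorder also works when nested inside other transforms (a sketch; the variable and tag names are illustrative):

>>> from torchrl.envs import Compose
>>> compose_env = TransformedEnv(
...     DMControlEnv("cheetah", "run", from_pixels=True),
...     Compose(VideoRecorder(logger=logger, tag="run_video_compose")),
... )
>>> compose_env.rollout(100)
>>> compose_env.transform.dump()  # Compose forwards the call to the VideoRecorder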

The transform can also be used within a dataset to save the videos it collects. Unlike in the environment case, images will come in a batch. The skip argument makes it possible to save images only at specific intervals.

>>> from torchrl.data.datasets import OpenXExperienceReplay
>>> from torchrl.envs import Compose
>>> from torchrl.record import VideoRecorder, CSVLogger
>>> # Create a logger that saves videos as mp4
>>> logger = CSVLogger("./dump", video_format="mp4")
>>> # We use the VideoRecorder transform to register the images coming from the batch.
>>> t = VideoRecorder(logger=logger, tag="pixels", in_keys=[("next", "observation", "image")])
>>> # Each batch of data will have 10 consecutive videos of 200 frames each (maximum, since strict_length=False)
>>> dataset = OpenXExperienceReplay("cmu_stretch", batch_size=2000, slice_len=200,
...             download=True, strict_length=False,
...             transform=t)
>>> # Get a batch of data and visualize it
>>> for data in dataset:
...     t.dump()
...     break
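
The skip and center_crop arguments can shrink the saved video, e.g. by keeping one frame out of four from a square center crop (an illustrative sketch; the values are arbitrary):

>>> t = VideoRecorder(logger=logger, tag="pixels",
...     in_keys=[("next", "observation", "image")],
...     skip=4, center_crop=240)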

The rollout video from the first example is available under ./cheetah_videos/cheetah/videos/run_video_0.mp4!
