class torchrl.envs.transforms.ToTensorImage(from_int: Optional[bool] = None, unsqueeze: bool = False, dtype: Optional[torch.dtype] = None, in_keys: Optional[Sequence[Union[str, Tuple[str, ...]]]] = None, out_keys: Optional[Sequence[Union[str, Tuple[str, ...]]]] = None)[source]

Transforms a numpy-like image (W x H x C) to a pytorch image (C x W x H).

Transforms an observation image from a (… x W x H x C) tensor to a (… x C x W x H) tensor. Optionally, scales the input tensor from the range [0, 255] to the range [0.0, 1.0] (see from_int for more details); otherwise, the tensor is returned without scaling.

  • from_int (bool, optional) – if True, the tensor will be scaled from the range [0, 255] to the range [0.0, 1.0]; if False, the tensor will not be scaled; if None, the tensor will be scaled if it is not a floating-point tensor. default=None.

  • unsqueeze (bool) – if True, the observation tensor is unsqueezed along the first dimension. default=False.

  • dtype (torch.dtype, optional) – dtype to use for the resulting observations.
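The core behavior described above (channel permutation plus the from_int scaling rule) can be sketched in plain torch. The function below is a hypothetical re-implementation for illustration, not the library's actual code; it assumes the documented semantics: scale when from_int is True, or when from_int is None and the input is not a floating-point tensor.

```python
import torch

def to_tensor_image(obs: torch.Tensor, from_int=None) -> torch.Tensor:
    # Move the trailing channel dim first: (... x W x H x C) -> (... x C x W x H).
    out = obs.permute(*range(obs.ndim - 3), -1, -3, -2)
    # Documented from_int rule: True -> always scale; False -> never;
    # None -> scale only non-floating-point (e.g. uint8) inputs.
    if from_int or (from_int is None and not obs.is_floating_point()):
        out = out.float() / 255.0
    return out

img = torch.randint(0, 256, (10, 11, 3), dtype=torch.uint8)
out = to_tensor_image(img)
print(out.shape, out.dtype)  # torch.Size([3, 10, 11]) torch.float32
```

A float input passed with from_int=None would be permuted but left unscaled, matching the rule above.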


>>> transform = ToTensorImage(in_keys=["pixels"])
>>> ri = torch.randint(0, 255, (1, 1, 10, 11, 3), dtype=torch.uint8)
>>> td = TensorDict(
...     {"pixels": ri},
...     [1, 1])
>>> _ = transform(td)
>>> obs = td.get("pixels")
>>> print(obs.shape, obs.dtype)
torch.Size([1, 1, 3, 10, 11]) torch.float32
transform_observation_spec(observation_spec: TensorSpec) → TensorSpec[source]

Transforms the observation spec such that the resulting spec matches the transform mapping.


Parameters:
    observation_spec (TensorSpec) – spec before the transform

Returns:
    expected spec after the transform
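The spec mapping mirrors the tensor mapping: the trailing (W, H, C) dimensions of the observation spec's shape become (C, W, H), with any leading batch dimensions preserved. A minimal sketch of that shape rule, using a hypothetical helper that is not part of torchrl:

```python
def channel_first_shape(shape: tuple) -> tuple:
    # Illustrative only: reorder the trailing (W, H, C) dims of a spec
    # shape to (C, W, H), keeping leading batch dims untouched.
    *batch, w, h, c = shape
    return (*batch, c, w, h)

print(channel_first_shape((1, 1, 10, 11, 3)))  # (1, 1, 3, 10, 11)
```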

