MultiCategorical

class torchrl.data.MultiCategorical(nvec: Union[Sequence[int], Tensor, int], shape: Optional[Size] = None, device: Optional[Union[device, str, int]] = None, dtype: Optional[Union[str, dtype]] = torch.int64, mask: Optional[Tensor] = None, remove_singleton: bool = True)[source]

A concatenation of discrete tensor specs.

Parameters:
  • nvec (iterable of integers or torch.Tensor) – cardinality of each of the elements of the tensor. Can have several axes.

  • shape (torch.Size, optional) – total shape of the sampled tensors. If provided, the last m dimensions must match nvec.shape.

  • device (str, int or torch.device, optional) – device of the tensors.

  • dtype (str or torch.dtype, optional) – dtype of the tensors.

  • remove_singleton (bool, optional) – if True, singleton samples (of size [1]) will be squeezed. Defaults to True.

  • mask (torch.Tensor or None) – mask some of the possible outcomes when a sample is taken. See update_mask() for more information.

Examples

>>> ts = MultiCategorical((3, 2, 3))
>>> ts.is_in(torch.tensor([2, 0, 1]))
True
>>> ts.is_in(torch.tensor([2, 10, 1]))
False
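The snippet below is an illustrative sketch (not part of the upstream docstring) showing how an explicit shape, whose last dimension matches nvec.shape, interacts with rand() and is_in():

>>> import torch
>>> from torchrl.data import MultiCategorical
>>> ts = MultiCategorical(nvec=(3, 2, 3), shape=torch.Size([4, 3]))
>>> sample = ts.rand()
>>> sample.shape
torch.Size([4, 3])
>>> ts.is_in(sample)
True
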
assert_is_in(value: Tensor) → None

Asserts that a tensor belongs to the box, and raises an exception otherwise.

Parameters:

value (torch.Tensor) – value to be checked.

clear_device_() → T

A no-op for all leaf specs (which must have a device).

For Composite specs, this method will erase the device.

clone() → MultiCategorical[source]

Creates a copy of the TensorSpec.

contains(item: torch.Tensor | tensordict.base.TensorDictBase) → bool

If the value val could have been generated by the TensorSpec, returns True, otherwise False.

See is_in() for more information.

cpu()

Casts the TensorSpec to ‘cpu’ device.

cuda(device=None)

Casts the TensorSpec to ‘cuda’ device.

device: torch.device | None = None

encode(val: numpy.ndarray | torch.Tensor | tensordict.base.TensorDictBase, *, ignore_device: bool = False) → torch.Tensor | tensordict.base.TensorDictBase

Encodes a value given the specified spec, and returns the corresponding tensor.

This method is to be used in environments that return a value (e.g., a numpy array) that can be easily mapped to the TorchRL required domain. If the value is already a tensor, the spec will not change its value and will return it as-is.

Parameters:

val (np.ndarray or torch.Tensor) – value to be encoded as tensor.

Keyword Arguments:

ignore_device (bool, optional) – if True, the spec device will be ignored. This is used to group tensor casting within a call to TensorDict(..., device="cuda") which is faster.

Returns:

torch.Tensor matching the required tensor specs.
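As an illustrative sketch (reusing the spec from the class-level example above; the printed value assumes the default torch.int64 dtype), encode() maps a numpy array onto the spec's tensor domain:

>>> import numpy as np
>>> ts = MultiCategorical((3, 2, 3))
>>> ts.encode(np.array([2, 0, 1]))
tensor([2, 0, 1])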

expand(*shape)[source]

Returns a new Spec with the expanded shape.

Parameters:

*shape (tuple or iterable of int) – the new shape of the Spec. Must be broadcastable with the current shape: its length must be at least as long as the current shape length, and its last values must be compliant too; i.e., they can only differ from the current shape if the corresponding current dimension is a singleton.
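A brief sketch (illustrative, not from the upstream docstring): a spec built from a 3-element nvec has shape [3] and can be expanded with a leading batch dimension:

>>> ts = MultiCategorical((3, 2, 3))
>>> ts.shape
torch.Size([3])
>>> ts.expand(4, 3).rand().shape
torch.Size([4, 3])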

flatten(start_dim: int, end_dim: int) → T

Flattens a TensorSpec.

Check flatten() for more information on this method.

classmethod implements_for_spec(torch_function: Callable) → Callable

Register a torch function override for TensorSpec.

abstract index(index: Union[int, Tensor, ndarray, slice, List], tensor_to_index: torch.Tensor | tensordict.base.TensorDictBase) → torch.Tensor | tensordict.base.TensorDictBase

Indexes the input tensor.

Parameters:
  • index (int, torch.Tensor, slice or list) – index of the tensor

  • tensor_to_index – tensor to be indexed

Returns:

indexed tensor

is_in(val: Tensor) → bool[source]

If the value val could have been generated by the TensorSpec, returns True, otherwise False.

More precisely, the is_in method checks that the value val is within the limits defined by the space attribute (the box), and that the dtype, device, shape and potentially other metadata match those of the spec. If any of these checks fails, the is_in method will return False.

Parameters:

val (torch.Tensor) – value to be checked.

Returns:

boolean indicating whether the value belongs to the TensorSpec box.

make_neg_dim(dim: int) → T

Converts a specific dimension to -1.

property ndim: int

Number of dimensions of the spec shape.

Shortcut for len(spec.shape).

ndimension() → int

Number of dimensions of the spec shape.

Shortcut for len(spec.shape).

one(shape: Optional[Size] = None) → torch.Tensor | tensordict.base.TensorDictBase

Returns a one-filled tensor in the box.

Note

Even though there is no guarantee that 1 belongs to the spec domain, this method will not raise an exception when this condition is violated. The primary use case of one is to generate empty data buffers, not meaningful data.

Parameters:

shape (torch.Size) – shape of the one-tensor

Returns:

a one-filled tensor sampled in the TensorSpec box.

ones(shape: Optional[Size] = None) → torch.Tensor | tensordict.base.TensorDictBase

Proxy to one().
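Illustrative sketch (not from the upstream docstring): for the spec used in the class-level example, one() returns a tensor of ones with the spec's shape and default int64 dtype, typically used to pre-allocate buffers rather than to produce meaningful samples:

>>> ts = MultiCategorical((3, 2, 3))
>>> ts.one()
tensor([1, 1, 1])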

project(val: torch.Tensor | tensordict.base.TensorDictBase) → torch.Tensor | tensordict.base.TensorDictBase

If the input tensor is not in the TensorSpec box, it is mapped back to the box using some defined heuristic.

Parameters:

val (torch.Tensor) – tensor to be mapped to the box.

Returns:

a torch.Tensor belonging to the TensorSpec box.
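Illustrative sketch: the exact mapping heuristic is spec-dependent, but the projected value is guaranteed to pass is_in():

>>> ts = MultiCategorical((3, 2, 3))
>>> projected = ts.project(torch.tensor([5, 0, 1]))
>>> ts.is_in(projected)
True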

rand(shape: Optional[Size] = None) → Tensor[source]

Returns a random tensor in the space defined by the spec.

The sampling will be done uniformly over the space, unless the box is unbounded in which case normal values will be drawn.

Parameters:

shape (torch.Size) – shape of the random tensor

Returns:

a random tensor sampled in the TensorSpec box.
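Illustrative sketch: when a shape is passed, it is prepended to the spec's own shape (assumed behavior, consistent with the batch semantics of the other sampling methods):

>>> ts = MultiCategorical((3, 2, 3))
>>> ts.rand(torch.Size([2])).shape
torch.Size([2, 3])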

reshape(*shape) → T

Reshapes a TensorSpec.

Check reshape() for more information on this method.

sample(shape: Optional[Size] = None) → torch.Tensor | tensordict.base.TensorDictBase

Returns a random tensor in the space defined by the spec.

See rand() for details.

squeeze(dim: Optional[int] = None)[source]

Returns a new Spec with all the dimensions of size 1 removed.

When dim is given, a squeeze operation is done only in that dimension.

Parameters:

dim (int or None) – the dimension to apply the squeeze operation to

to(dest: Union[dtype, device, str, int]) → MultiCategorical[source]

Casts a TensorSpec to a device or a dtype.

Returns the same spec if no change is made.
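Illustrative sketch (assumed behavior: an integer dtype cast is accepted and reflected in the returned spec):

>>> ts = MultiCategorical((3, 2, 3))
>>> ts.to(torch.int32).dtype
torch.int32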

to_categorical(val: Tensor, safe: Optional[bool] = None) → MultiCategorical[source]

No-op for MultiCategorical: the value is already in categorical format and is returned as-is.

to_categorical_spec() → MultiCategorical[source]

No-op for MultiCategorical: the spec is already categorical.

to_numpy(val: Tensor, safe: Optional[bool] = None) → dict

Returns the np.ndarray counterpart of an input tensor.

This is intended to be the inverse operation of encode().

Parameters:
  • val (torch.Tensor) – tensor to be converted to a np.ndarray.

  • safe (bool) – boolean value indicating whether a check should be performed on the value against the domain of the spec. Defaults to the value of the CHECK_SPEC_ENCODE environment variable.

Returns:

a np.ndarray.
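Illustrative sketch: for a leaf (non-composite) spec such as this one, the result is a plain np.ndarray:

>>> ts = MultiCategorical((3, 2, 3))
>>> arr = ts.to_numpy(torch.tensor([2, 0, 1]))
>>> type(arr)
<class 'numpy.ndarray'>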

to_one_hot(val: Tensor, safe: Optional[bool] = None) → Union[MultiOneHot, Tensor][source]

Encodes a discrete tensor from the spec domain into its one-hot equivalent.

Parameters:
  • val (torch.Tensor, optional) – Tensor to one-hot encode.

  • safe (bool) – boolean value indicating whether a check should be performed on the value against the domain of the spec. Defaults to the value of the CHECK_SPEC_ENCODE environment variable.

Returns:

The one-hot encoded tensor.

to_one_hot_spec() → MultiOneHot[source]

Converts the spec to the equivalent one-hot spec.
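Illustrative sketch: with nvec (3, 2, 3), the one-hot encoding concatenates the three groups into a vector of length 3 + 2 + 3 = 8, and the equivalent one-hot spec has the matching shape:

>>> ts = MultiCategorical((3, 2, 3))
>>> ts.to_one_hot(torch.tensor([2, 0, 1])).shape
torch.Size([8])
>>> ts.to_one_hot_spec().shape
torch.Size([8])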

type_check(value: Tensor, key: Optional[NestedKey] = None) → None

Checks the input value dtype against the TensorSpec dtype and raises an exception if they don’t match.

Parameters:
  • value (torch.Tensor) – tensor whose dtype has to be checked.

  • key (str, optional) – if the TensorSpec has keys, the value dtype will be checked against the spec pointed by the indicated key.

unflatten(dim: int, sizes: Tuple[int]) → T

Unflattens a TensorSpec.

Check unflatten() for more information on this method.

unsqueeze(dim: int)[source]

Returns a new Spec with one more singleton dimension (at the position indicated by dim).

Parameters:

dim (int or None) – the dimension to apply the unsqueeze operation to.
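Illustrative sketch covering unsqueeze() together with squeeze() above:

>>> ts = MultiCategorical((3, 2, 3))
>>> ts.unsqueeze(0).shape
torch.Size([1, 3])
>>> ts.unsqueeze(0).squeeze(0).shape
torch.Size([3])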

update_mask(mask)[source]

Sets a mask to prevent some of the possible outcomes when a sample is taken.

The mask can also be set during initialization of the spec.

Parameters:

mask (torch.Tensor or None) – boolean mask. If None, the mask is disabled. Otherwise, the shape of the mask must be expandable to the shape of the equivalent one-hot spec. False masks an outcome and True leaves the outcome unmasked. If all of the possible outcomes are masked, then an error is raised when a sample is taken.

Examples

>>> torch.manual_seed(0)
>>> mask = torch.tensor([False, False, True,
...                      True, True])
>>> ts = MultiCategorical((3, 2), (5, 2,), dtype=torch.int64, mask=mask)
>>> # All but one of the three possible outcomes for the first
>>> # group are masked, but neither of the two possible
>>> # outcomes for the second group is masked.
>>> ts.rand()
tensor([[2, 1],
        [2, 0],
        [2, 1],
        [2, 1],
        [2, 1]])

view(*shape) → T

Reshapes a TensorSpec.

Check reshape() for more information on this method.

zero(shape: Optional[Size] = None) → torch.Tensor | tensordict.base.TensorDictBase

Returns a zero-filled tensor in the box.

Note

Even though there is no guarantee that 0 belongs to the spec domain, this method will not raise an exception when this condition is violated. The primary use case of zero is to generate empty data buffers, not meaningful data.

Parameters:

shape (torch.Size) – shape of the zero-tensor

Returns:

a zero-filled tensor sampled in the TensorSpec box.

zeros(shape: Optional[Size] = None) → torch.Tensor | tensordict.base.TensorDictBase

Proxy to zero().
