Categorical¶
- class torchrl.data.Categorical(n: int, shape: Optional[Size] = None, device: Optional[Union[device, str, int]] = None, dtype: str | torch.dtype = torch.int64, mask: Optional[Tensor] = None)[source]¶
A discrete tensor spec.
An alternative to OneHot for categorical variables in TorchRL. Categorical variables perform indexing instead of masking, which can speed up computation and reduce memory cost for large categorical variables.
The spec will have the shape defined by the shape argument: if a singleton dimension is desired for the training dimension, one should specify it explicitly.
- Parameters:
n (int) – number of possible outcomes.
shape (torch.Size, optional) – shape of the variable, default is torch.Size([]).
device (str, int or torch.device, optional) – device of the tensors.
dtype (str or torch.dtype, optional) – dtype of the tensors.
mask (torch.Tensor or None) – mask some of the possible outcomes when a sample is taken. See
update_mask()
for more information.
Examples
>>> categ = Categorical(3)
>>> categ
Categorical(
    shape=torch.Size([]),
    space=CategoricalBox(n=3),
    device=cpu,
    dtype=torch.int64,
    domain=discrete)
>>> categ.rand()
tensor(2)
>>> categ = Categorical(3, shape=(1,))
>>> categ
Categorical(
    shape=torch.Size([1]),
    space=CategoricalBox(n=3),
    device=cpu,
    dtype=torch.int64,
    domain=discrete)
>>> categ.rand()
tensor([1])
- assert_is_in(value: Tensor) None ¶
Asserts that a tensor belongs to the box, raising an exception otherwise.
- Parameters:
value (torch.Tensor) – value to be checked.
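A minimal usage sketch (illustrative; the commented call shows a value that would fail the check):
>>> categ = Categorical(3)
>>> categ.assert_is_in(torch.tensor(1))    # a valid outcome, passes silently
>>> # categ.assert_is_in(torch.tensor(5))  # would raise, since 5 is not in {0, 1, 2}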
- clear_device_() T ¶
A no-op for all leaf specs (which must have a device).
For
Composite
specs, this method will erase the device.
- clone() Categorical [source]¶
Creates a copy of the TensorSpec.
- contains(item: torch.Tensor | tensordict.base.TensorDictBase) bool ¶
If the value val could have been generated by the TensorSpec, returns True, otherwise False.
See is_in() for more information.
- cpu()¶
Casts the TensorSpec to ‘cpu’ device.
- cuda(device=None)¶
Casts the TensorSpec to ‘cuda’ device.
- device: torch.device | None = None¶
- encode(val: numpy.ndarray | torch.Tensor | tensordict.base.TensorDictBase, *, ignore_device: bool = False) torch.Tensor | tensordict.base.TensorDictBase ¶
Encodes a value given the specified spec, and returns the corresponding tensor.
This method is to be used in environments that return a value (e.g., a numpy array) that can be easily mapped to the TorchRL required domain. If the value is already a tensor, the spec will not change its value and will return it as-is.
- Parameters:
val (np.ndarray or torch.Tensor) – value to be encoded as tensor.
- Keyword Arguments:
ignore_device (bool, optional) – if True, the spec device will be ignored. This is used to group tensor casting within a call to TensorDict(..., device="cuda"), which is faster.
- Returns:
torch.Tensor matching the required tensor specs.
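A minimal sketch of encoding a numpy value, assuming numpy is available (output shown is illustrative):
>>> import numpy as np
>>> categ = Categorical(3)
>>> categ.encode(np.array(2))
tensor(2)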
- expand(*shape)[source]¶
Returns a new Spec with the expanded shape.
- Parameters:
*shape (tuple or iterable of int) – the new shape of the Spec. Must be broadcastable with the current shape: its length must be at least as long as the current shape length, and its last values must be compliant too; i.e., they can only differ from it if the current dimension is a singleton.
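A short sketch of expanding a spec with a singleton dimension to a batched shape (illustrative):
>>> categ = Categorical(3, shape=(1,))
>>> categ.expand(10, 1).shape
torch.Size([10, 1])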
- flatten(start_dim: int, end_dim: int) T ¶
Flattens a TensorSpec.
Check flatten() for more information on this method.
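A brief sketch, assuming the usual torch.flatten semantics applied to the spec shape:
>>> categ = Categorical(3, shape=(2, 3))
>>> categ.flatten(0, 1).shape
torch.Size([6])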
- classmethod implements_for_spec(torch_function: Callable) Callable ¶
Register a torch function override for TensorSpec.
- abstract index(index: Union[int, Tensor, ndarray, slice, List], tensor_to_index: torch.Tensor | tensordict.base.TensorDictBase) torch.Tensor | tensordict.base.TensorDictBase ¶
Indexes the input tensor.
- Parameters:
index (int, torch.Tensor, slice or list) – index of the tensor
tensor_to_index – tensor to be indexed
- Returns:
indexed tensor
- is_in(val: Tensor) bool [source]¶
If the value val could have been generated by the TensorSpec, returns True, otherwise False.
More precisely, the is_in method checks that the value val is within the limits defined by the space attribute (the box), and that the dtype, device, shape and potentially other metadata match those of the spec. If any of these checks fails, the is_in method will return False.
- Parameters:
val (torch.Tensor) – value to be checked.
- Returns:
boolean indicating whether the value belongs to the TensorSpec box.
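A minimal sketch of membership checks (illustrative):
>>> categ = Categorical(3)
>>> categ.is_in(torch.tensor(1))
True
>>> categ.is_in(torch.tensor(5))    # out of range
False
>>> categ.is_in(torch.tensor(1.0))  # wrong dtype
False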
- make_neg_dim(dim: int) T ¶
Converts a specific dimension to -1.
- property ndim: int¶
Number of dimensions of the spec shape.
Shortcut for len(spec.shape).
- ndimension() int ¶
Number of dimensions of the spec shape.
Shortcut for len(spec.shape).
- one(shape: Optional[Size] = None) torch.Tensor | tensordict.base.TensorDictBase ¶
Returns a one-filled tensor in the box.
Note
Even though there is no guarantee that 1 belongs to the spec domain, this method will not raise an exception when this condition is violated. The primary use case of one is to generate empty data buffers, not meaningful data.
- Parameters:
shape (torch.Size) – shape of the one-tensor
- Returns:
a one-filled tensor sampled in the TensorSpec box.
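A short sketch, assuming that an optional leading shape is prepended to the spec shape as in rand() (outputs illustrative):
>>> categ = Categorical(3, shape=(2,))
>>> categ.one()
tensor([1, 1])
>>> categ.one((4,)).shape
torch.Size([4, 2])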
- ones(shape: Optional[Size] = None) torch.Tensor | tensordict.base.TensorDictBase ¶
Proxy to one().
- project(val: torch.Tensor | tensordict.base.TensorDictBase) torch.Tensor | tensordict.base.TensorDictBase ¶
If the input tensor is not in the TensorSpec box, this method maps it back into the box according to a defined heuristic.
- Parameters:
val (torch.Tensor) – tensor to be mapped to the box.
- Returns:
a torch.Tensor belonging to the TensorSpec box.
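A minimal sketch; the precise mapping heuristic is not shown here, only that the projected value passes is_in():
>>> categ = Categorical(3)
>>> projected = categ.project(torch.tensor(7))  # 7 is outside the box
>>> categ.is_in(projected)
True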
- rand(shape: Optional[Size] = None) Tensor [source]¶
Returns a random tensor in the space defined by the spec.
The sampling will be done uniformly over the space, unless the box is unbounded, in which case normal values will be drawn.
- Parameters:
shape (torch.Size) – shape of the random tensor
- Returns:
a random tensor sampled in the TensorSpec box.
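A brief sketch; sampled values are random, so only shapes are shown:
>>> categ = Categorical(3, shape=(2,))
>>> categ.rand().shape
torch.Size([2])
>>> categ.rand((4,)).shape  # the extra batch shape is prepended to the spec shape
torch.Size([4, 2])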
- sample(shape: Optional[Size] = None) torch.Tensor | tensordict.base.TensorDictBase ¶
Returns a random tensor in the space defined by the spec.
See
rand()
for details.
- squeeze(dim=None)[source]¶
Returns a new Spec with all the dimensions of size 1 removed.
When dim is given, a squeeze operation is done only in that dimension.
- Parameters:
dim (int or None) – the dimension to apply the squeeze operation to.
- to(dest: Union[dtype, device, str, int]) Categorical [source]¶
Casts a TensorSpec to a device or a dtype.
Returns the same spec if no change is made.
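A minimal sketch of a device cast (illustrative; dtype casts follow the same pattern):
>>> categ = Categorical(3)
>>> categ.to("cpu").device
device(type='cpu')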
- to_categorical_spec() Categorical [source]¶
No-op for categorical.
- to_numpy(val: Tensor, safe: Optional[bool] = None) dict [source]¶
Returns the np.ndarray correspondent of an input tensor.
This is intended to be the inverse operation of encode().
- Parameters:
val (torch.Tensor) – tensor to be transformed into numpy.
safe (bool) – boolean value indicating whether a check should be performed on the value against the domain of the spec. Defaults to the value of the
CHECK_SPEC_ENCODE
environment variable.
- Returns:
a np.ndarray.
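A minimal sketch, assuming numpy is available (output repr is illustrative):
>>> categ = Categorical(3, shape=(2,))
>>> categ.to_numpy(categ.zero())
array([0, 0])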
- to_one_hot(val: Tensor, safe: Optional[bool] = None) Tensor [source]¶
Encodes a discrete tensor from the spec domain into its one-hot correspondent.
- Parameters:
val (torch.Tensor, optional) – Tensor to one-hot encode.
safe (bool) – boolean value indicating whether a check should be performed on the value against the domain of the spec. Defaults to the value of the
CHECK_SPEC_ENCODE
environment variable.
- Returns:
The one-hot encoded tensor.
Examples
>>> categ = Categorical(3)
>>> categ_sample = categ.zero()
>>> categ_sample
tensor(0)
>>> onehot_sample = categ.to_one_hot(categ_sample)
>>> onehot_sample
tensor([ True, False, False])
- to_one_hot_spec() OneHot [source]¶
Converts the spec to the equivalent one-hot spec.
Examples
>>> categ = Categorical(3)
>>> categ.to_one_hot_spec()
OneHot(
    shape=torch.Size([3]),
    space=CategoricalBox(n=3),
    device=cpu,
    dtype=torch.bool,
    domain=discrete)
- type_check(value: Tensor, key: Optional[NestedKey] = None) None ¶
Checks the input value dtype against the TensorSpec dtype and raises an exception if they don't match.
- Parameters:
value (torch.Tensor) – tensor whose dtype has to be checked.
key (str, optional) – if the TensorSpec has keys, the value dtype will be checked against the spec pointed by the indicated key.
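A short sketch (illustrative; the exact exception raised on a mismatch is not shown):
>>> categ = Categorical(3)
>>> categ.type_check(torch.tensor(1))      # int64 matches the spec dtype, passes silently
>>> # categ.type_check(torch.tensor(1.0))  # float32 does not match and would raise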
- unflatten(dim: int, sizes: Tuple[int]) T ¶
Unflattens a TensorSpec.
Check unflatten() for more information on this method.
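A brief sketch mirroring the flatten() example above:
>>> categ = Categorical(3, shape=(6,))
>>> categ.unflatten(0, (2, 3)).shape
torch.Size([2, 3])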
- unsqueeze(dim: int)[source]¶
Returns a new Spec with one more singleton dimension (at the position indicated by dim).
dim (int or None) – the dimension to apply the unsqueeze operation to.
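A short sketch of an unsqueeze/squeeze round trip (illustrative):
>>> categ = Categorical(3, shape=(2,))
>>> categ.unsqueeze(0).shape
torch.Size([1, 2])
>>> categ.unsqueeze(0).squeeze(0).shape
torch.Size([2])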
- update_mask(mask)[source]¶
Sets a mask to prevent some of the possible outcomes when a sample is taken.
The mask can also be set during initialization of the spec.
- Parameters:
mask (torch.Tensor or None) – boolean mask. If None, the mask is disabled. Otherwise, the shape of the mask must be expandable to the shape of the equivalent one-hot spec. False masks an outcome and True leaves the outcome unmasked. If all of the possible outcomes are masked, then an error is raised when a sample is taken.
Examples
>>> mask = torch.tensor([True, False, True])
>>> ts = Categorical(3, (10,), dtype=torch.int64, mask=mask)
>>> # One of the three possible outcomes is masked
>>> ts.rand()
tensor([0, 2, 2, 0, 2, 0, 2, 2, 0, 2])
- zero(shape: Optional[Size] = None) torch.Tensor | tensordict.base.TensorDictBase ¶
Returns a zero-filled tensor in the box.
Note
Even though there is no guarantee that 0 belongs to the spec domain, this method will not raise an exception when this condition is violated. The primary use case of zero is to generate empty data buffers, not meaningful data.
- Parameters:
shape (torch.Size) – shape of the zero-tensor
- Returns:
a zero-filled tensor sampled in the TensorSpec box.
- zeros(shape: Optional[Size] = None) torch.Tensor | tensordict.base.TensorDictBase ¶
Proxy to zero().