NonTensor
- class torchrl.data.NonTensor(shape: torch.Size | int = torch.Size([1]), device: DEVICE_TYPING | None = None, dtype: torch.dtype | None = None, example_data: Any = None, batched: bool = False, **kwargs)[source]
A spec for non-tensor data.
The NonTensor class is designed to handle specifications for data that do not conform to standard tensor structures. It maintains attributes such as shape and device, similar to the NonTensorData class. The dtype is optional and should, in practice, be left as None in most cases. Methods like rand, zero, and one return a NonTensorData object whose data value is the example_data (None if no example is provided).
Warning
The default shape of NonTensor is (1,).
- Parameters:
shape (Union[torch.Size, int], optional) – The shape of the non-tensor data. Defaults to (1,).
device (Optional[DEVICE_TYPING], optional) – The device on which the data is stored. Defaults to None.
dtype (torch.dtype | None, optional) – The data type of the non-tensor data. Defaults to None.
example_data (Any, optional) – An example of the data that this spec represents. This example is used as a template when generating new data with the rand, zero, and one methods.
batched (bool, optional) – Indicates whether the data is batched. If True, the rand, zero, and one methods will generate data with an additional batch dimension, stacking copies of the example_data across this dimension. Defaults to False.
**kwargs – Additional keyword arguments passed to the parent class.
See also
Choice
which allows one to randomly choose among different specs when calling rand.
Examples
>>> from torchrl.data import NonTensor
>>> spec = NonTensor(example_data="a string", batched=False, shape=(3,))
>>> spec.rand()
NonTensorData(data=a string, batch_size=torch.Size([3]), device=None)
>>> spec = NonTensor(example_data="a string", batched=True, shape=(3,))
>>> spec.rand()
NonTensorStack(
    ['a string', 'a string', 'a string'],
    batch_size=torch.Size([3]),
    device=None)
- assert_is_in(value: Tensor) None
Asserts whether a tensor belongs to the box, and raises an exception otherwise.
- Parameters:
value (torch.Tensor) – value to be checked.
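Example (a minimal sketch, assuming that a value generated by the spec itself is always a member of the spec):
>>> from torchrl.data import NonTensor
>>> spec = NonTensor(example_data="a string", shape=(3,))
>>> spec.assert_is_in(spec.rand())  # passes silently; a non-conforming value would raise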
- cardinality() Any [source]
The cardinality of the spec.
This refers to the number of possible outcomes in a spec. It is assumed that the cardinality of a composite spec is the cartesian product of all possible outcomes.
- clear_device_() T
A no-op for all leaf specs (which must have a device).
For Composite specs, this method will erase the device.
- contains(item: torch.Tensor | TensorDictBase) bool
If the value val could have been generated by the TensorSpec, returns True, otherwise False.
See is_in() for more information.
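Example (a minimal sketch; contains defers to is_in, so a value generated by the spec is expected to be contained):
>>> from torchrl.data import NonTensor
>>> spec = NonTensor(example_data="a string", shape=(3,))
>>> spec.contains(spec.rand())  # expected: True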
- property device: device
The device of the spec.
Only Composite specs can have a None device. All leaves must have a non-null device.
- encode(val: np.ndarray | torch.Tensor | TensorDictBase, *, ignore_device: bool = False) torch.Tensor | TensorDictBase [source]
Encodes a value given the specified spec, and returns the corresponding tensor.
This method is to be used in environments that return a value (e.g., a NumPy array) that can be easily mapped to the TorchRL required domain. If the value is already a tensor, the spec will not change its value and will return it as-is.
- Parameters:
val (np.ndarray or torch.Tensor) – value to be encoded as tensor.
- Keyword Arguments:
ignore_device (bool, optional) – if True, the spec device will be ignored. This is used to group tensor casting within a call to TensorDict(..., device="cuda"), which is faster.
- Returns:
torch.Tensor matching the required tensor specs.
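Example (a sketch under the assumption that, for a NonTensor spec, encode wraps the raw Python object in non-tensor data rather than casting it to a tensor):
>>> from torchrl.data import NonTensor
>>> spec = NonTensor(example_data="a string", shape=(1,))
>>> out = spec.encode("another string")  # assumption: wraps the value in non-tensor data
>>> type(out).__name__                   # expected (unverified assumption): 'NonTensorData'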
- enumerate(use_mask: bool = False) Any [source]
Returns all the samples that can be obtained from the TensorSpec.
The samples will be stacked along the first dimension.
This method is only implemented for discrete specs.
- Parameters:
use_mask (bool, optional) – If True and the spec has a mask, samples that are masked are excluded. Defaults to False.
- expand(*shape)[source]
Returns a new Spec with the expanded shape.
- Parameters:
*shape (tuple or iterable of int) – the new shape of the Spec. Must be broadcastable with the current shape: its length must be at least as long as the current shape's, and its last values must be compliant too; i.e., they can only differ from the current shape where the corresponding dimension is a singleton.
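Example (a minimal sketch following the broadcasting rule above, where a singleton trailing dimension is expanded):
>>> from torchrl.data import NonTensor
>>> spec = NonTensor(example_data="a string", shape=(1,))
>>> spec.expand(4, 1).shape  # expected: torch.Size([4, 1])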
- flatten(start_dim: int, end_dim: int) T
Flattens a TensorSpec.
Check flatten() for more information on this method.
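Example (a minimal sketch, assuming flatten merges the selected dimensions as it would for a tensor):
>>> from torchrl.data import NonTensor
>>> spec = NonTensor(example_data="a string", shape=(2, 3))
>>> spec.flatten(0, 1).shape  # expected: torch.Size([6])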
- classmethod implements_for_spec(torch_function: Callable) Callable
Register a torch function override for TensorSpec.
- index(index: INDEX_TYPING, tensor_to_index: torch.Tensor | TensorDictBase) torch.Tensor | TensorDictBase [source]
Indexes the input tensor.
This method is to be used with specs that encode one or more categorical variables (e.g., OneHot or Categorical), such that indexing of a tensor with a sample can be done without caring about the actual representation of the index.
- Parameters:
index (int, torch.Tensor, slice or list) – index of the tensor
tensor_to_index – tensor to be indexed
- Returns:
indexed tensor
- Examples:
>>> from torchrl.data import OneHot
>>> import torch
>>>
>>> one_hot = OneHot(n=100)
>>> categ = one_hot.to_categorical_spec()
>>> idx_one_hot = torch.zeros((100,), dtype=torch.bool)
>>> idx_one_hot[50] = 1
>>> print(one_hot.index(idx_one_hot, torch.arange(100)))
tensor(50)
>>> idx_categ = one_hot.to_categorical(idx_one_hot)
>>> print(categ.index(idx_categ, torch.arange(100)))
tensor(50)
- is_in(val: Any) bool [source]
If the value val could have been generated by the TensorSpec, returns True, otherwise False.
More precisely, the is_in method checks that the value val is within the limits defined by the space attribute (the box), and that the dtype, device, shape and potentially other metadata match those of the spec. If any of these checks fails, the is_in method will return False.
- Parameters:
val (torch.Tensor) – value to be checked.
- Returns:
boolean indicating if the value belongs to the TensorSpec box.
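Example (a minimal sketch; a spec-generated value is expected to pass, while a plain tensor or a mismatched shape is not):
>>> import torch
>>> from torchrl.data import NonTensor
>>> spec = NonTensor(example_data="a string", shape=(3,))
>>> spec.is_in(spec.rand())     # expected: True
>>> spec.is_in(torch.zeros(3))  # expected: False, a plain tensor is not non-tensor data
>>> other = NonTensor(example_data="a string", shape=(5,))
>>> spec.is_in(other.rand())    # expected: False, shape mismatch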
- one(shape=None)[source]
Returns a one-filled tensor in the box.
Note
Even though there is no guarantee that 1 belongs to the spec domain, this method will not raise an exception when this condition is violated. The primary use case of one is to generate empty data buffers, not meaningful data.
- Parameters:
shape (torch.Size) – shape of the one-tensor
- Returns:
a one-filled tensor sampled in the TensorSpec box.
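Example (a sketch; for a NonTensor spec, one() is expected to mirror rand() and carry the example_data placeholder rather than literal ones):
>>> from torchrl.data import NonTensor
>>> spec = NonTensor(example_data="a string", shape=(3,))
>>> spec.one()  # expected: NonTensorData(data=a string, batch_size=torch.Size([3]), device=None)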
- ones(shape: torch.Size = None) torch.Tensor | TensorDictBase
Proxy to one().
- project(val: torch.Tensor | TensorDictBase) torch.Tensor | TensorDictBase
If the input tensor is not in the TensorSpec box, it is mapped back to the box using some defined heuristic.
- Parameters:
val (torch.Tensor) – tensor to be mapped to the box.
- Returns:
a torch.Tensor belonging to the TensorSpec box.
- rand(shape=None)[source]
Returns a random tensor in the space defined by the spec.
The sampling will be done uniformly over the space, unless the box is unbounded in which case normal values will be drawn.
- Parameters:
shape (torch.Size) – shape of the random tensor
- Returns:
a random tensor sampled in the TensorSpec box.
- reshape(*shape) T
Reshapes a TensorSpec.
Check reshape() for more information on this method.
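Example (a minimal sketch, assuming the total number of elements is preserved as with torch.Tensor.reshape):
>>> from torchrl.data import NonTensor
>>> spec = NonTensor(example_data="a string", shape=(2, 3))
>>> spec.reshape(6).shape  # expected: torch.Size([6])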
- sample(shape: torch.Size = None) torch.Tensor | TensorDictBase
Returns a random tensor in the space defined by the spec.
See rand() for details.
- squeeze(dim: int | None = None) NonTensor [source]
Returns a new Spec with all the dimensions of size 1 removed.
When dim is given, a squeeze operation is done only in that dimension.
- Parameters:
dim (int or None) – the dimension to apply the squeeze operation to
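Example (a minimal sketch of both the global and the per-dimension squeeze):
>>> from torchrl.data import NonTensor
>>> spec = NonTensor(example_data="a string", shape=(1, 3, 1))
>>> spec.squeeze().shape   # expected: torch.Size([3])
>>> spec.squeeze(0).shape  # expected: torch.Size([3, 1])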
- to(dest: torch.dtype | DEVICE_TYPING) NonTensor [source]
Casts a TensorSpec to a device or a dtype.
Returns the same spec if no change is made.
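Example (a minimal sketch, assuming a CPU device is available):
>>> from torchrl.data import NonTensor
>>> spec = NonTensor(example_data="a string", shape=(3,))
>>> spec.to("cpu").device  # expected: device(type='cpu')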
- to_numpy(val: torch.Tensor | TensorDictBase, safe: bool = None) np.ndarray | dict [source]
Returns the np.ndarray counterpart of an input tensor.
This is intended to be the inverse operation of encode().
- Parameters:
val (torch.Tensor) – tensor to be transformed into numpy.
safe (bool) – boolean value indicating whether a check should be performed on the value against the domain of the spec. Defaults to the value of the CHECK_SPEC_ENCODE environment variable.
- Returns:
a np.ndarray.
- type_check(value: Tensor, key: Optional[NestedKey] = None) None
Checks the input value dtype against the TensorSpec dtype and raises an exception if they don't match.
- Parameters:
value (torch.Tensor) – tensor whose dtype has to be checked.
key (str, optional) – if the TensorSpec has keys, the value dtype will be checked against the spec pointed by the indicated key.
- unflatten(dim: int, sizes: tuple[int]) T
Unflattens a TensorSpec.
Check unflatten() for more information on this method.
- unsqueeze(dim: int) NonTensor [source]
Returns a new Spec with one more singleton dimension (at the position indicated by dim).
- Parameters:
dim (int or None) – the dimension to apply the unsqueeze operation to.
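Example (a minimal sketch showing the singleton dimension inserted at the requested position):
>>> from torchrl.data import NonTensor
>>> spec = NonTensor(example_data="a string", shape=(3,))
>>> spec.unsqueeze(0).shape  # expected: torch.Size([1, 3])
>>> spec.unsqueeze(1).shape  # expected: torch.Size([3, 1])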
- view(*shape) T
Reshapes a TensorSpec.
Check reshape() for more information on this method.
- zero(shape=None)[source]
Returns a zero-filled tensor in the box.
Note
Even though there is no guarantee that 0 belongs to the spec domain, this method will not raise an exception when this condition is violated. The primary use case of zero is to generate empty data buffers, not meaningful data.
- Parameters:
shape (torch.Size) – shape of the zero-tensor
- Returns:
a zero-filled tensor sampled in the TensorSpec box.
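Example (a sketch; like one(), zero() on a NonTensor spec is expected to return the example_data placeholder rather than literal zeros):
>>> from torchrl.data import NonTensor
>>> spec = NonTensor(example_data="a string", shape=(3,))
>>> spec.zero()  # expected: NonTensorData(data=a string, batch_size=torch.Size([3]), device=None)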
- zeros(shape: torch.Size = None) torch.Tensor | TensorDictBase
Proxy to zero().