set_capture_non_tensor_stack
- class tensordict.set_capture_non_tensor_stack(mode: bool)
A context manager or decorator to control whether identical non-tensor data should be stacked into a single NonTensorData object or a NonTensorStack.
- Parameters:
  mode (bool) – Whether to capture non-tensor stacks. If False, identical non-tensor data will be stacked into a NonTensorStack. If True, a single NonTensorData object will contain the unique value, but with the desired batch-size. Defaults to True.
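For contrast with the Examples below, here is a minimal sketch of the default behavior (mode=True) described above; the repr shown follows the shorthand used on this page and the exact output formatting may differ:
>>> import torch
>>> from tensordict import NonTensorData, set_capture_non_tensor_stack
>>> with set_capture_non_tensor_stack(True):
...     # identical values are captured into a single NonTensorData with the stacked batch-size
...     torch.stack([NonTensorData("a"), NonTensorData("a")])
NonTensorData("a", batch_size=[2])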
Note
Until v0.9, this will raise a warning if the same value is encountered and the value is not set explicitly (capture_non_tensor_stack() = True is the default behavior). You can set the value of capture_non_tensor_stack() through:
- the CAPTURE_NON_TENSOR_STACK environment variable;
- calling set_capture_non_tensor_stack(val: bool).set() at the beginning of your script;
- using set_capture_non_tensor_stack(val: bool) as a context manager or a decorator.
It is recommended to use the set_capture_non_tensor_stack(False) behavior.
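For instance, a minimal sketch of the two global routes listed above (the exact parsing of the CAPTURE_NON_TENSOR_STACK value is an assumption; setting it before importing tensordict is the safest option):
import os
os.environ["CAPTURE_NON_TENSOR_STACK"] = "False"  # route 1: environment variable (value format assumed)

from tensordict import set_capture_non_tensor_stack

# route 2: fix the value once at the top of the script (False is the recommended setting)
set_capture_non_tensor_stack(False).set()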
Examples
>>> with set_capture_non_tensor_stack(False):
...     torch.stack([NonTensorData("a"), NonTensorData("a")])
NonTensorStack(["a", "a"], stack_dim=0)
>>> @set_capture_non_tensor_stack(False)
... def my_function():
...     return torch.stack([NonTensorData("a"), NonTensorData("a")])
>>> my_function()
NonTensorStack(["a", "a"], stack_dim=0)