
set_capture_non_tensor_stack

class tensordict.set_capture_non_tensor_stack(mode: bool)

A context manager or decorator to control whether identical non-tensor data should be stacked into a single NonTensorData object or a NonTensorStack.

Parameters:

mode (bool) – Whether to capture non-tensor stacks. If False, identical non-tensor data will be stacked into a NonTensorStack. If True, identical non-tensor data will be collapsed into a single NonTensorData object that contains the unique value with the stack's batch size. Defaults to True.
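
For instance, with mode=True (the default), stacking identical values collapses into a single NonTensorData carrying the stack's batch size. A minimal sketch of this branch, assuming torch and tensordict are importable and using the same shorthand repr as the Examples below:

>>> import torch
>>> from tensordict import NonTensorData, set_capture_non_tensor_stack
>>> with set_capture_non_tensor_stack(True):
...     torch.stack([NonTensorData("a"), NonTensorData("a")])
NonTensorData("a", batch_size=[2])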

Note

Until v0.9, a warning will be raised when identical values are stacked and this setting has not been set explicitly (the default behavior corresponds to capture_non_tensor_stack() returning True). You can set the value of capture_non_tensor_stack() through:

  • Setting the CAPTURE_NON_TENSOR_STACK environment variable;

  • Calling set_capture_non_tensor_stack(val: bool).set() at the beginning of your script;

  • Using set_capture_non_tensor_stack(val: bool) as a context manager or a decorator (see the Examples below).

Using set_capture_non_tensor_stack(False) is recommended. The first two options are sketched below; the context manager and decorator forms are shown in the Examples.
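
A minimal sketch, assuming the environment variable accepts a boolean-like string (the exact accepted values are not documented here) and that it is set before tensordict reads the flag:

>>> import os
>>> os.environ["CAPTURE_NON_TENSOR_STACK"] = "False"  # assumed value format; set before tensordict reads the flag
>>> from tensordict import set_capture_non_tensor_stack
>>> set_capture_non_tensor_stack(False).set()  # global setting for the rest of the script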

Examples

>>> import torch
>>> from tensordict import NonTensorData, set_capture_non_tensor_stack
>>> with set_capture_non_tensor_stack(False):
...     torch.stack([NonTensorData("a"), NonTensorData("a")])
NonTensorStack(["a", "a"], stack_dim=0)
>>> @set_capture_non_tensor_stack(False)
... def my_function():
...     return torch.stack([NonTensorData("a"), NonTensorData("a")])
>>> my_function()
NonTensorStack(["a", "a"], stack_dim=0)
