tensordict package

The TensorDict class simplifies the process of passing multiple tensors from module to module by packing them in a dictionary-like object that inherits features from regular PyTorch tensors.
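
As a quick illustration, here is a minimal sketch of how a TensorDict is built and manipulated; the key names, shapes and batch size are made up for the example:

>>> import torch
>>> from tensordict import TensorDict
>>> # the batch_size must match the leading dimensions of every entry
>>> data = TensorDict(
...     {"obs": torch.randn(4, 8, 3), "action": torch.randn(4, 8, 2)},
...     batch_size=[4, 8],
... )
>>> sample = data[0]       # indexes every entry along the first batch dimension
>>> flat = data.view(-1)   # reshapes the batch dimensions of every entry at once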

TensorDictBase()

TensorDictBase is an abstract parent class for TensorDicts, a torch.Tensor data container.

TensorDict([source, batch_size, device, ...])

A batched dictionary of tensors.

LazyStackedTensorDict(*tensordicts[, ...])

A lazy stack of TensorDicts.

PersistentTensorDict(*[, batch_size, ...])

Persistent TensorDict implementation.

TensorDictParams(parameters, *[, ...])

Holds a TensorDictBase instance full of parameters.

get_defaults_to_none([set_to_none])

Returns whether get() falls back on None as the default value for missing keys.

Constructors and handlers

The library offers a few methods to interact with other data structures such as NumPy structured arrays, namedtuples or h5 files. It also exposes dedicated functions to manipulate tensordicts, such as save, load, stack or cat.
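
For instance, a minimal sketch combining a few of the functions listed below; the keys and the checkpoint prefix are hypothetical:

>>> import torch
>>> import tensordict
>>> td = tensordict.from_dict({"a": torch.zeros(3), "b": torch.ones(3)}, batch_size=[3])
>>> both = tensordict.stack([td, td.clone()], dim=0)   # batch_size becomes [2, 3]
>>> tensordict.save(td, prefix="./td_ckpt")            # hypothetical prefix
>>> reloaded = tensordict.load("./td_ckpt")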

cat(input[, dim, out])

Concatenates tensordicts into a single tensordict along the given dimension.

from_consolidated(filename)

Reconstructs a tensordict from a consolidated file.

from_dict(input_dict[, batch_size, device, ...])

Returns a TensorDict created from a dictionary or another TensorDict.

from_h5(filename[, mode])

Creates a PersistentTensorDict from a h5 file.

from_module(module[, as_module, lock, ...])

Copies the params and buffers of a module into a tensordict.

from_modules(*modules[, as_module, lock, ...])

Retrieves the parameters of several modules for ensemble learning/mixture-of-experts applications through vmap.

from_namedtuple(named_tuple, *[, ...])

Converts a namedtuple to a TensorDict recursively.

from_pytree(pytree, *[, batch_size, ...])

Converts a pytree to a TensorDict instance.

from_struct_array(struct_array[, device])

Converts a structured numpy array to a TensorDict.

fromkeys(keys[, value])

Creates a tensordict from a list of keys and a single value.

lazy_stack(input[, dim, out])

Creates a lazy stack of tensordicts.

load(prefix[, device, non_blocking, out])

Loads a tensordict from disk.

load_memmap(prefix[, device, non_blocking, out])

Loads a memory-mapped tensordict from disk.

maybe_dense_stack(input[, dim, out])

Attempts to make a dense stack of tensordicts, and falls back on a lazy stack when required.

memmap(data[, prefix, copy_existing, ...])

Writes all tensors onto a corresponding memory-mapped Tensor in a new tensordict.

save(data[, prefix, copy_existing, ...])

Saves the tensordict to disk.

stack(input[, dim, out])

Stacks tensordicts into a single tensordict along the given dimension.

TensorDict as a context manager

TensorDict can be used as a context manager in situations where an action has to be done and then undone. This includes temporarily locking/unlocking a tensordict:

>>> data.lock_()  # data.set will result in an exception
>>> with data.unlock_():
...     data.set("key", value)
>>> assert data.is_locked

or to execute functional calls with a TensorDict instance containing the parameters and buffers of a model:

>>> params = TensorDict.from_module(module).clone()
>>> params.zero_()
>>> with params.to_module(module):
...     y = module(x)

In the first example, we can modify the tensordict data because we have temporarily unlocked it. In the second example, we populate the module with the parameters and buffers contained in the params tensordict instance, and restore the original parameters once the block exits.

Memory-mapped tensors

tensordict offers the MemoryMappedTensor primitive, which allows you to work with tensors stored in physical memory in a handy way. The main advantages of MemoryMappedTensor are its ease of construction (no need to handle the storage of a tensor), the possibility of working with big contiguous data that would not fit in memory, efficient (de)serialization across processes, and efficient indexing of stored tensors.

If all workers have access to the same storage (both in multiprocess and distributed settings), passing a MemoryMappedTensor simply amounts to passing a reference to a file on disk plus some extra metadata for reconstructing it. The same goes for indexed memory-mapped tensors as long as the data pointer of their storage is the same as the original one.

Indexing memory-mapped tensors is much faster than loading several independent files from disk and does not require loading the full content of the array into memory. Handling their physical storage is otherwise no different from that of regular PyTorch tensors:

>>> my_images = MemoryMappedTensor.empty((1_000_000, 3, 480, 480), dtype=torch.uint8)
>>> mini_batch = my_images[:10]  # just reads the first 10 images of the dataset
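
The same mechanism backs tensordict (de)serialization through the memmap and load_memmap helpers listed above; a minimal sketch, where the key, shapes and target directory are made up for the example:

>>> import torch
>>> import tensordict
>>> from tensordict import TensorDict
>>> data = TensorDict(
...     {"images": torch.zeros(100, 3, 480, 480, dtype=torch.uint8)},
...     batch_size=[100],
... )
>>> # writes every tensor to disk and returns a new, memory-mapped tensordict
>>> mm = tensordict.memmap(data, prefix="/path/to/storage")
>>> reloaded = tensordict.load_memmap("/path/to/storage")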

MemoryMappedTensor(source, *[, dtype, ...])

A Memory-mapped Tensor.

Utils

utils.expand_as_right(tensor, dest)

Expand a tensor on the right to match another tensor shape.

utils.expand_right(tensor, shape)

Expand a tensor on the right to match a desired shape.

utils.isin(input, reference, key[, dim])

Tests if each element of key in input dim is also present in the reference.

utils.remove_duplicates(input, key[, dim, ...])

Removes indices duplicated in key along the specified dimension.

is_batchedtensor(arg0)

is_tensor_collection(datatype)

Checks if a data object or a type is a tensor container from the tensordict lib.

make_tensordict([input_dict, batch_size, device])

Returns a TensorDict created from the keyword arguments or an input dictionary.

merge_tensordicts(*tensordicts[, callback_exist])

Merges tensordicts together.

pad(tensordict, pad_size[, value])

Pads all tensors in a tensordict along the batch dimensions with a constant value, returning a new tensordict.

pad_sequence(list_of_tensordicts[, pad_dim, ...])

Pads a list of tensordicts in order for them to be stacked together in a contiguous format.

dense_stack_tds(td_list[, dim])

Densely stack a list of TensorDictBase objects (or a LazyStackedTensorDict) given that they have the same structure.

set_lazy_legacy(mode)

Sets the behaviour of some methods to a lazy transform.

lazy_legacy([allow_none])

Returns True if lazy representations will be used for selected methods.

parse_tensor_dict_string(s)

Parse a TensorDict repr to a TensorDict.
