ReplayBuffer

class torchrl.data.ReplayBuffer(*, storage: Storage | None = None, sampler: Sampler | None = None, writer: Writer | None = None, collate_fn: Callable | None = None, pin_memory: bool = False, prefetch: int | None = None, transform: Transform | None = None, batch_size: int | None = None)[source]

A generic, composable replay buffer class.

Keyword Arguments:
  • storage (Storage, optional) – the storage to be used. If none is provided, a default ListStorage with max_size of 1_000 will be created.

  • sampler (Sampler, optional) – the sampler to be used. If none is provided, a default RandomSampler will be used.

  • writer (Writer, optional) – the writer to be used. If none is provided, a default RoundRobinWriter will be used.

  • collate_fn (callable, optional) – merges a list of samples to form a mini-batch of Tensor(s)/outputs. Used for batched loading from a map-style dataset. The default value will be decided based on the storage type.

  • pin_memory (bool) – whether pin_memory() should be called on the samples returned by the replay buffer.

  • prefetch (int, optional) – number of next batches to be prefetched using multithreading. Defaults to None (no prefetching).

  • transform (Transform, optional) – Transform to be executed when sample() is called. To chain transforms use the Compose class. Transforms should be used with tensordict.TensorDict content. If used with other structures, the transforms should be encoded with a "data" leading key that will be used to construct a tensordict from the non-tensordict content.

  • batch_size (int, optional) – the batch size to be used when sample() is called.

    Note: the batch size can be specified at construction time via the batch_size argument, or at sampling time. The former should be preferred whenever the batch size is consistent across the experiment; if it is likely to change, it can be passed to the sample() method instead. Passing the batch size at sampling time is incompatible with prefetching (since prefetching requires knowing the batch size in advance) as well as with samplers that have a drop_last argument (a construction sketch with prefetching is shown after the examples below).

Examples

>>> import torch
>>>
>>> from torchrl.data import ReplayBuffer, ListStorage
>>>
>>> torch.manual_seed(0)
>>> rb = ReplayBuffer(
...     storage=ListStorage(max_size=1000),
...     batch_size=5,
... )
>>> # populate the replay buffer and get the item indices
>>> data = range(10)
>>> indices = rb.extend(data)
>>> # sample will return as many elements as specified in the constructor
>>> sample = rb.sample()
>>> print(sample)
tensor([4, 9, 3, 0, 3])
>>> # Passing the batch-size to the sample method overrides the one in the constructor
>>> sample = rb.sample(batch_size=3)
>>> print(sample)
tensor([9, 7, 3])
>>> # one can sample using the sample() method or iterate over the buffer
>>> for i, batch in enumerate(rb):
...     print(i, batch)
...     if i == 3:
...         break
0 tensor([7, 3, 1, 6, 6])
1 tensor([9, 8, 6, 6, 8])
2 tensor([4, 3, 6, 9, 1])
3 tensor([4, 4, 1, 9, 9])
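
Because prefetching computes the next batches ahead of time, it requires a fixed batch size known at construction (see the note on batch_size above). A minimal construction sketch, with illustrative values for the sizes:

>>> rb = ReplayBuffer(
...     storage=ListStorage(max_size=1000),
...     batch_size=32,   # required here, since prefetching needs it in advance
...     prefetch=2,      # prepare the next two batches using multithreading
... )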

Replay buffers accept any kind of data. Not all storage types will work, as some expect numerical data only, but the default torchrl.data.ListStorage will:

Examples

>>> torch.manual_seed(0)
>>> buffer = ReplayBuffer(storage=ListStorage(100), collate_fn=lambda x: x)
>>> indices = buffer.extend(["a", 1, None])
>>> buffer.sample(3)
[None, 'a', None]
add(data: Any) → int[source]

Add a single element to the replay buffer.

Parameters:

data (Any) – data to be added to the replay buffer

Returns:

index where the data lives in the replay buffer.
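
A minimal usage sketch, assuming a freshly created buffer backed by ListStorage; on an empty buffer the first element lands at index 0.

>>> rb = ReplayBuffer(storage=ListStorage(max_size=10))
>>> index = rb.add(torch.tensor([1.0, 2.0]))
>>> print(index)
0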

append_transform(transform: Transform) → None[source]

Appends a transform at the end of the transform chain.

Transforms are applied in order when sample is called.

Parameters:

transform (Transform) – The transform to be appended
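
A hedged sketch of a custom transform applied at sampling time. The ScaleReward class is an illustrative example (not part of the library), LazyTensorStorage is one possible storage choice, and the exact Transform hooks may vary across torchrl versions.

>>> from tensordict import TensorDict
>>> from torchrl.data import LazyTensorStorage
>>> from torchrl.envs.transforms import Transform
>>>
>>> class ScaleReward(Transform):  # hypothetical transform, for illustration only
...     def forward(self, tensordict):
...         tensordict["reward"] = tensordict["reward"] * 0.1
...         return tensordict
...
>>> rb = ReplayBuffer(storage=LazyTensorStorage(100), batch_size=4)
>>> rb.append_transform(ScaleReward())
>>> data = TensorDict({"obs": torch.randn(10, 3), "reward": torch.ones(10, 1)}, [10])
>>> _ = rb.extend(data)
>>> sample = rb.sample()
>>> sample["reward"]  # rewards are scaled by the transform at sampling time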

empty()[source]

Empties the replay buffer and resets the cursor to 0.
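
A minimal sketch, assuming a buffer that has already been populated; len() reports the number of stored elements.

>>> len(rb)   # buffer populated beforehand
10
>>> rb.empty()
>>> len(rb)
0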

extend(data: Sequence) → Tensor[source]

Extends the replay buffer with one or more elements contained in an iterable.

If present, the inverse transforms will be called.

Parameters:

data (iterable) – collection of data to be added to the replay buffer.

Returns:

Indices of the data added to the replay buffer.
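
A small sketch of the returned indices, assuming a fresh buffer with the default round-robin writer, so the first elements land at consecutive indices starting from 0.

>>> rb = ReplayBuffer(storage=ListStorage(max_size=10))
>>> indices = rb.extend(range(4))
>>> print(indices)
tensor([0, 1, 2, 3])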

insert_transform(index: int, transform: Transform) → None[source]

Inserts a transform at the given position in the transform chain.

Transforms are executed in order when sample is called.

Parameters:
  • index (int) – Position to insert the transform.

  • transform (Transform) – The transform to be inserted.
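
A hedged continuation of the append_transform sketch above: inserting at index 0 places the transform at the front of the chain, so it runs before previously appended transforms when sample() is called.

>>> rb.insert_transform(0, ScaleReward())  # ScaleReward is the illustrative transform defined above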

sample(batch_size: int | None = None, return_info: bool = False) → Any[source]

Samples a batch of data from the replay buffer.

Uses Sampler to sample indices, and retrieves them from Storage.

Parameters:
  • batch_size (int, optional) – size of the batch of data to be collected. If none is provided, this method will sample a batch whose size is indicated by the sampler.

  • return_info (bool) – whether to return info. If True, the result is a tuple (data, info). If False, the result is the data.

Returns:

A batch of data selected from the replay buffer, or a tuple containing this batch and info if the return_info flag is set to True.
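
A minimal sketch of return_info, reusing a populated buffer; the content of info depends on the sampler (prioritized samplers, for instance, report sampling weights and indices), so only the tuple structure is shown here.

>>> data, info = rb.sample(return_info=True)
>>> isinstance(info, dict)
True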
