Attention
June 2024 Status Update: Removing DataPipes and DataLoader V2
We are re-focusing the torchdata repo to be an iterative enhancement of torch.utils.data.DataLoader. We do not plan to continue developing or maintaining the [DataPipes] and [DataLoaderV2] solutions, and they will be removed from the torchdata repo. We will also be revisiting the DataPipes references in pytorch/pytorch. In release torchdata==0.8.0 (July 2024) they will be marked as deprecated, and in 0.9.0 (Oct 2024) they will be deleted. Existing users are advised to pin to torchdata==0.8.0 or an older version until they are able to migrate away. Subsequent releases will not include DataPipes or DataLoaderV2. Please reach out if you have suggestions or comments (please use this issue for feedback).
InMemoryCacheHolder¶
- class torchdata.datapipes.map.InMemoryCacheHolder(source_dp: MapDataPipe[T_co])¶
Stores elements from the source DataPipe in memory (functional name: in_memory_cache). Once an item is stored, it will remain unchanged, and subsequent retrievals will return the same element. Since items from a MapDataPipe are lazily computed, this can be used to store the results of a previous MapDataPipe and reduce the number of duplicate computations.

Note

The default cache is a Dict. If another data structure is more suitable as cache for your use case, consider extending this class.

- Parameters:
source_dp – source DataPipe from which elements are read and stored in memory
Example
>>> from torchdata.datapipes.map import SequenceWrapper
>>> source_dp = SequenceWrapper(range(10))
>>> cache_dp = source_dp.in_memory_cache()
>>> list(cache_dp)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
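The memoization behavior described above can be sketched without torchdata. The `CachedMap` class below is a hypothetical stand-in, not part of the library: it wraps a per-index computation and stores each result in a plain dict (mirroring the default Dict cache), so repeated accesses to the same index skip the underlying computation.

```python
class CachedMap:
    """Minimal sketch of dict-backed in-memory caching for a map-style
    source (hypothetical stand-in; not the InMemoryCacheHolder API)."""

    def __init__(self, compute, length):
        self.compute = compute   # lazy per-index computation to memoize
        self.length = length
        self.cache = {}          # default cache is a plain dict

    def __len__(self):
        return self.length

    def __getitem__(self, index):
        if index not in self.cache:
            # Computed at most once per index; the stored item then
            # remains unchanged for all subsequent retrievals.
            self.cache[index] = self.compute(index)
        return self.cache[index]


calls = []

def expensive(i):
    calls.append(i)  # record each time the computation actually runs
    return i * i

dp = CachedMap(expensive, 5)
first = [dp[i] for i in range(5)]   # populates the cache
second = [dp[i] for i in range(5)]  # served entirely from the cache
# first == second == [0, 1, 4, 9, 16]; expensive ran once per index
```

Because the second pass hits only the cache, `expensive` runs exactly five times in total, which is the duplicate-computation saving the class description refers to.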