Attention
June 2024 Status Update: Removing DataPipes and DataLoader V2
We are re-focusing the torchdata repo to be an iterative enhancement of torch.utils.data.DataLoader. We do not plan to continue developing or maintaining the [DataPipes] and [DataLoaderV2] solutions, and they will be removed from the torchdata repo. We will also revisit the DataPipes references in pytorch/pytorch. In release torchdata==0.8.0 (July 2024) they will be marked as deprecated, and in 0.9.0 (Oct 2024) they will be deleted. Existing users are advised to pin to torchdata==0.8.0 or an older version until they are able to migrate away. Subsequent releases will not include DataPipes or DataLoaderV2. Please reach out if you have suggestions or comments (please use this issue for feedback).
Prefetcher
- class torchdata.datapipes.iter.Prefetcher(source_datapipe, buffer_size: int = 10)
Prefetches elements from the source DataPipe and puts them into a buffer (functional name: prefetch). Prefetching performs the operations (e.g. I/O, computations) of the DataPipes up to and including this one ahead of time and stores the results in the buffer, ready to be consumed by the subsequent DataPipe. It has no effect other than getting samples ready ahead of time.

This is used by MultiProcessingReadingService when the arguments worker_prefetch_cnt (for prefetching at each worker process) or main_prefetch_cnt (for prefetching at the main loop) are greater than 0.

Beyond these built-in use cases, it can also be useful to place this DataPipe after DataPipes with expensive I/O operations (e.g. ones where requesting a file from a remote server takes a long time).
- Parameters:
source_datapipe – IterDataPipe from which samples are prefetched
buffer_size – the size of the buffer which stores the prefetched samples
Example
>>> from torchdata.datapipes.iter import IterableWrapper
>>> dp = IterableWrapper(file_paths).open_files().prefetch(5)
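The buffering behavior described above can be illustrated with a small stdlib-only sketch: a background thread pulls from the source iterable into a bounded queue while the consumer drains it. This is an illustration of the prefetching idea, not the torchdata implementation; the function name `prefetch` and the sentinel mechanism here are assumptions for the example.

```python
import queue
import threading

_SENTINEL = object()  # marks exhaustion of the source

def prefetch(source, buffer_size=10):
    """Yield items from `source`, filling a bounded buffer of size
    `buffer_size` ahead of time in a background thread (sketch of
    the Prefetcher idea, not torchdata's implementation)."""
    buf = queue.Queue(maxsize=buffer_size)

    def producer():
        for item in source:
            buf.put(item)      # blocks when the buffer is full
        buf.put(_SENTINEL)     # signal that the source is exhausted

    threading.Thread(target=producer, daemon=True).start()
    while (item := buf.get()) is not _SENTINEL:
        yield item

# Items arrive in source order; while the consumer processes one item,
# up to `buffer_size` later items may already be sitting in the buffer.
items = list(prefetch(iter(range(5)), buffer_size=2))
```

When the expensive work (e.g. a remote file request) happens in the producer, the consumer overlaps its own processing with that I/O, which is the entire benefit: order and contents are unchanged, only latency is hidden.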