Prefetcher¶
- class torchdata.datapipes.iter.Prefetcher(source_datapipe, buffer_size: int = 10)¶
Prefetches elements from the source DataPipe and puts them into a buffer (functional name: prefetch). Prefetching performs the operations (e.g. I/O, computations) of the DataPipes up to this one ahead of time and stores the results in the buffer, ready to be consumed by the subsequent DataPipe. It has no effect aside from getting the sample ready ahead of time.
This is used by MultiProcessingReadingService when the arguments worker_prefetch_cnt (for prefetching at each worker process) or main_prefetch_cnt (for prefetching at the main loop) are greater than 0.
Beyond these built-in use cases, this can also be useful to place after DataPipes with expensive I/O operations (e.g. when requesting a file from a remote server takes a long time).
- Parameters:
source_datapipe – IterDataPipe from which samples are prefetched
buffer_size – the size of the buffer which stores the prefetched samples
Example
>>> from torchdata.datapipes.iter import IterableWrapper
>>> dp = IterableWrapper(file_paths).open_files().prefetch(5)
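Conceptually, a prefetcher is a bounded producer-consumer buffer: a background worker pulls items from the source ahead of time so that the consumer rarely blocks on slow I/O. The sketch below illustrates that idea with only the standard library; it is not the torchdata implementation (which manages its own prefetch thread, buffer states, and snapshotting), and the names prefetch and slow_source are illustrative.

```python
import queue
import threading
import time

_SENTINEL = object()  # marks the end of the source iterable

def prefetch(source, buffer_size=10):
    """Yield items from `source`, filling a bounded buffer in a background thread."""
    buf = queue.Queue(maxsize=buffer_size)

    def worker():
        # Run the upstream pipeline ahead of time; blocks when the buffer is full.
        for item in source:
            buf.put(item)
        buf.put(_SENTINEL)

    threading.Thread(target=worker, daemon=True).start()
    while True:
        item = buf.get()
        if item is _SENTINEL:
            return
        yield item

def slow_source():
    # Simulate an I/O-bound DataPipe (e.g. remote file requests).
    for i in range(5):
        time.sleep(0.01)
        yield i

print(list(prefetch(slow_source(), buffer_size=3)))  # → [0, 1, 2, 3, 4]
```

As in Prefetcher, the buffer size bounds how far ahead the worker may run: a larger buffer hides more upstream latency at the cost of memory holding the prefetched samples.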